Prosecution Insights
Last updated: April 19, 2026
Application No. 17/324,881

STABILIZATION SYSTEM FOR NAVIGATION CAMERA IN COMPUTER-ASSISTED SURGERY

Status: Final Rejection (§101, §103, §112)
Filed: May 19, 2021
Examiner: ROBINSON, NICHOLAS A
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Orthosoft ULC
OA Round: 8 (Final)

Grant Probability: 49% (Moderate); 99% with interview
OA Rounds: 9-10
To Grant: 3y 6m

Examiner Intelligence

Career Allow Rate: 49% (grants 64 of 131 resolved cases; -21.1% vs TC avg)
Interview Lift: +54.9% (strong), comparing allow rates among resolved cases with vs. without an interview
Avg Prosecution: 3y 6m (typical timeline)
Currently Pending: 51
Total Applications: 182 (career history, across all art units)

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 30.6% (-9.4% vs TC avg)

Tech Center averages are estimates; figures based on career data from 131 resolved cases.
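For readers who want to sanity-check the headline numbers, the sketch below reproduces the arithmetic. Only the 64/131 career counts come from the data above; the with/without-interview split is a hypothetical placeholder chosen to land near the reported +54.9% lift, since the underlying split is not published here.

```python
# Arithmetic behind the examiner statistics above. Only `granted` and
# `resolved` (64 / 131) come from the data shown; the per-interview
# split is a hypothetical placeholder, since only the aggregate
# +54.9% lift is reported.

granted, resolved = 64, 131
career_allow_rate = granted / resolved            # 0.4885 -> "49%"

with_iv_granted, with_iv_resolved = 39, 46        # hypothetical split
no_iv_granted, no_iv_resolved = 25, 85            # hypothetical split
lift = (with_iv_granted / with_iv_resolved
        - no_iv_granted / no_iv_resolved)         # ~ +0.554

print(f"Career allow rate: {career_allow_rate:.1%}")  # 48.9%
print(f"Interview lift:    {lift:+.1%}")              # +55.4%
```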

Office Action

§101 §103 §112
DETAILED ACTION

The Amendment filed on 02/27/2026 is acknowledged and has been entered. Claims 1, 4, 6-7, 9-10, 14, and 26 have been amended. Claims 15-20 are canceled. Presently, claims 1-14 and 21-26 remain pending and are hereinafter examined on the merits.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Previous rejections under 35 U.S.C. 112(b) and previous claim objections are withdrawn in view of the amendments filed on 02/27/2026.

Applicant's arguments with respect to the rejection under 35 U.S.C. 101 have been fully considered but are not persuasive. The Applicant primarily argues that the claims are analogous to those found eligible in Thales and therefore are not directed to an abstract idea under Step 2A, Prong One. This comparison is not commensurate with the actual claim language. In Thales, the claims recited a specific configuration of inertial sensors and a particular manner of using signals from those sensors to determine relative orientation, which the courts found to be a non-conventional technological solution to a technological problem. In contrast, the present claims do not recite any specific sensor configuration, any particularized algorithm for determining orientation, or any technical improvement in how image data is processed. Rather, the claims broadly recite detecting a change in orientation, quantifying that change as an angular variation, and adjusting tracking data accordingly. These operations remain at a high level of generality and are functionally described without any specific implementation details. As such, the claims are directed to a mental process (i.e., observation, estimation, and adjustment), even when performed using generic computer components, as previously set forth in the rejection.

The Applicant's reliance on Thales also introduces considerations not reflected in the claims. Specifically, the Applicant asserts that the claims provide improved accuracy and a non-conventional technique for tracking. However, the claims do not recite any specific mechanism, algorithm, or structural arrangement that would provide such improvements, or any specific implementation details. There is no requirement in the claims for how the image processing is performed, how the angular variation is computed beyond general quantification, or how the adjustment of tracking data is carried out in a technologically specific manner. Accordingly, the asserted advantages and alleged improvements are not supported by the claim language and cannot establish eligibility.

With respect to Step 2A, Prong Two, the Applicant does not substantially address the Examiner's determination that the additional elements merely implement the abstract idea using generic components and perform data gathering and output functions. The recited image capture device, processing unit, memory, and display remain described at a high level of generality and perform ordinary functions. The Applicant's arguments do not identify any claim element that integrates the alleged abstract idea into a practical application through a specific technological improvement or a meaningful limitation on the abstract idea. Further, the Applicant again states that the claimed tracking occurs in "real-time" during a surgical procedure.
However, this characterization is already addressed in the rejection and remains insufficient to confer eligibility. The real-time data gathering evaluated at Prong Two is still data gathering, and data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)). The claim language itself does not impose any specialized technological improvement through "real-time" beyond the mere recitation of that term. "Real-time" simply denotes that the steps are performed while the surgical procedure is ongoing; it does not specify any technological requirement that would preclude the abstract idea identified in Prong One. Therefore, the real-time nature of the claimed steps does not transform the identified abstract idea into a technological improvement, and reliance on the need for high precision does not negate the data-gathering nature of these steps. Accordingly, the Applicant's arguments are not persuasive, and the 35 U.S.C. § 101 rejection is maintained.

Applicant's arguments with respect to the rejection under 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant's arguments focus on Lee's preference for a stationary camera and characterize Lee as requiring the camera to remain fixed during operation. The argument is similar to the one filed on 09/16/2025, asserting that any prior art with a stationary camera cannot be moved and/or cannot be modified to be moved. For similar reasons, these arguments are not persuasive. The claims do not recite any requirement that the camera must be movable during use, nor do they exclude systems in which the camera is stationary for certain measurements. The rejection does not rely on Lee for teaching a rotatable coupling to a movable arm, as expressly acknowledged in the Office Action, but instead relies on Coiseur for that feature. Accordingly, the Applicant's reliance on Lee's alleged lack of a movable or rotatable camera is not responsive to the actual combination relied upon.

The Applicant's argument that Lee does not store a first orientation is not persuasive. The rejection explained that Lee's use of initial values and a fixed coordinate relationship (¶0070) requires that the system establish and utilize an initial reference state for coordinate conversion. This constitutes storing or otherwise maintaining a first orientation or an equivalent reference frame, and the claims do not preclude this interpretation. The Applicant's argument relies on a narrower interpretation of the claim, requiring an explicit storage step or a particular form of orientation data that is not recited in the claims. The claims' broad recitation of "storing a first orientation" is reasonably met by Lee's use of a fixed initial reference for coordinate transformations.

Applicant also argues that Lee does not "detect," "quantify," or "adjust" based on changes in camera orientation. This argument is not persuasive because it mischaracterizes the teachings relied upon. As set forth in the rejection, Lee's IMU measures angular velocity of the camera unit, which directly corresponds to detecting and quantifying changes in orientation. The rejection does not rely on Lee to explicitly perform all of the downstream adjustment steps in isolation, but rather on the combined teachings of Lee and Abhari. With respect to Abhari, the Applicant asserts that the deficiencies of Lee remain even when combined.
However, this argument fails to address the specific teachings relied upon. Abhari explicitly teaches updating displayed representations based on changes in the camera field of view, which occur due to changes in camera position and orientation (¶Abstract, ¶0004-0005, ¶0050, ¶0057-0058, ¶0082-0084, ¶0088, ¶0090-0091, ¶0094, ¶0107, ¶0109). This directly corresponds to adjusting tracking data to account for angular variation of the image capture device.

Applicant further contends that modifying Lee would "eviscerat[e]" its purpose. This argument is not persuasive because Lee itself contemplates camera movement (¶0039-0040) and measures angular velocity to account for such movement. The proposed combination merely utilizes known techniques, as taught by Abhari, to update tracking based on such movement, yielding predictable results such as maintaining spatial accuracy. The modification would not and does not render Lee inoperable for its intended purpose but rather enhances its functionality in a known manner.

Finally, the Applicant asserts that the Office has not shown a teaching of storing a first orientation or of detecting and quantifying an orientation change in the combination. This is not accurate. The rejection relies on Lee for establishing and maintaining an initial reference (i.e., first orientation) and for measuring angular velocity (i.e., detecting and quantifying orientation change), and on Abhari for adjusting the tracking data based on changes in camera orientation. The Applicant's arguments improperly require these limitations to be disclosed in a single reference. For these reasons, the rejections under 35 U.S.C. § 103 are maintained.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph: (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "processing unit" for tracking in claims 1, 2, and 26, and "image-capture device" for quantifying changes of the tool and/or bone and for performing image processing in claims 1-6, 9-10, 21-23, and 25-26, respectively, invoking 35 U.S.C. 112(f). The terms "device" and "unit" are non-structural generic placeholders that do not include any specific structure for performing the accompanying functions. See MPEP 2181.I.A: The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6: "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Massachusetts Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir.
2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886–87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998).

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, limitations. Claim 1: "processing unit" refers to a processing function for receiving image data obtained with the image capture device; see ¶0070 of the specification: "The CAS controller 40 has a processor unit 40′ (a.k.a., processing unit, such as a processor, CPU, ASIC, etc)". The term "image-capture device" refers to a camera; see ¶0072 of the specification: "image-capture device 52 is one of type featuring retro-reflective spheres (e.g., Navitrack® system), other image based tracking technologies may be used, such as depth cameras, 3D cameras, etc". Therefore, the "processing unit" refers to a generic processor that performs the recited function, and the "image-capture device" refers to an image capturing device that performs the recited function.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 and 21-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 of the subject matter eligibility test (see MPEP 2106.03): Claims 1-13 and 21-25 are directed to an "apparatus," which falls within one of the four statutory categories of patentable subject matter, i.e., a machine. Claim 26 is likewise directed to an "apparatus," i.e., a machine.

Step 2A of the subject matter eligibility test (see MPEP 2106.04).
Prong One: Claim 1 recites ("sets forth" or "describes") the abstract idea of "a mental process" (MPEP 2106.04(a)(2).III.), substantially as follows: "detecting a change in orientation of the image-capture device from the first orientation to a second orientation, quantifying the change in the orientation of the image-capture device from the first orientation to the second orientation, the change in the orientation quantified by an angular variation of the image-capture device about one or more degrees of freedom, and tracking the at least one tool and/or the at least one bone with the image-capture device at the second orientation by adjusting tracking data, derived from the image processing of the received image data, of the at least one tool and/or the at least one bone to account for the angular variation of the image-capture device."

Claim 26 recites ("sets forth" or "describes") the abstract idea of "a mental process" (MPEP 2106.04(a)(2).III.), substantially as follows: "detecting a change in orientation of the image capture device from the first orientation to a second orientation, quantifying the change in the orientation of the image capture device from the first orientation to the second orientation, the change in the orientation quantified by an angular variation of the image capture device about one or more degrees of freedom, and the tracking of the at least one tool and/or of the at least one bone with the image capture device at the second orientation by adjusting tracking data of the at least one tool and/or the at least one bone to account for the angular variation of the image capture device;"

In claims 1 and 26, the above recited steps can be practically performed in the human mind. Specifically, the detecting, quantifying, and tracking steps recite concepts that can be practically performed in the human mind because they describe mental acts of observation, recognition, estimation, and cognitive adjustment that are, at their core, mental concepts. A human observer could mentally perform the step of detecting a change in orientation of an image-capture device simply by visually noting that the device has been rotated or shifted relative to a previous position. The quantifying step could likewise be accomplished through mental estimation, by comparing the apparent angular difference between the first and second orientations and mentally assigning a value or degree of variation, such as "about 20 degrees." Similarly, tracking the position of a tool or bone at the second orientation by adjusting prior tracking information can be achieved mentally by envisioning or redrawing the relative spatial position of those objects to compensate for the perceived angular change. In other words, the observer could mentally adjust their understanding of the relative position of the tracked tool or bone to compensate for the new viewpoint (the claim merely states "adjust tracking data"), ensuring continued accurate awareness of the tracked objects' locations. There is nothing recited in the claim to suggest an undue level of complexity in how the above detecting, quantifying, and tracking steps are performed. Thus, each of the above limitations reflects operations of observation, comparison, estimation, and mental adjustment, activities that, at their core, can be performed entirely within the human mind (i.e., an abstract idea).

Prong Two: Claims 1 and 26 do not include additional elements that integrate the mental process into a practical application.
This judicial exception is not integrated into a practical application. In particular, the claims recite (1) additional elements of "an image-capture device configured to be rotatably coupled to a movable arm; a processing unit configured to receive image data obtained with the image-capture device; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: tracking the at least one tool and/or the at least one bone in real-time with the image-capture device by performing image processing on the received image data; with during a surgical procedure on a patient, storing the first orientation of the image-capture device; and [...] tracking the at least one tool and/or the at least one bone in real-time with the image-capture device at a first orientation;" (claim 1); "a tracking system having an image capture device configured to be rotatably coupled to a movable arm via one or more actuators; a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: the tracking of the at least one tool and/or of the at least one bone in real-time with the tracking system performing image processing, with the image capture device being at a first orientation during a surgical procedure on a patient, and during the surgical procedure on the patient storing the first orientation of the image capture device;" (claim 26); and (2) the further additional steps of "outputting, based on the tracking, signals to a display device to cause the display to depict the position of the at least one tool and/or the at least one bone; wherein during the surgical procedure on the patient, the tracking comprises:" (claim 1); "based on the detected change in orientation and the adjusted tracking data, moving, via the one or more actuators, the image capture device from the second orientation to the first orientation; and continuing to track the at least one tool with the image capture device at the first orientation." (claim 26).

The steps in (1) represent merely data gathering or pre-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality with conventionally used tools (see Step 2B below for further details). Data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)). The steps in (2) represent merely data gathering or post-solution activities that are necessary for use of the recited judicial exception and are likewise recited at a high level of generality with conventionally used tools. In addition, displaying information on a display cannot be performed in the human mind; hence, it is not part of the abstract idea. However, it is not a practical application either; it is merely an insignificant post-solution activity. Further, the abstract idea is not applied, relied on, or used in a meaningful way. No improvement to the technology is evident, and the determined visualization is not outputted in any way such that a practical benefit is realized.
Regarding the generic recitation of insignificant post-solution activity directed to moving the image-capture device via one or more actuators "based on" the detected change in orientation and the adjusted tracking data: this activity does not transform the abstract idea into a practical application because the abstract idea is not applied in a meaningful way. There are no steps, specific algorithmic transformations, or control parameters that describe how the detected change in orientation and the adjusted tracking data are applied to perform the step of moving the actuators. Regarding the processor language, it is written at such a high level of generality that it amounts to a generic computer component with mere instructions to implement the abstract idea on a computer. The limitation is generically recited in the claims such that it amounts to data gathering steps that would not integrate a judicial exception or provide significantly more.

The real-time data gathering evaluated at Prong Two is still data gathering, and data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)). The claim language itself does not impose any specialized technological improvement through "real-time" beyond the mere recitation of that term. "Real-time" simply denotes that the steps are performed while the surgical procedure is ongoing; it does not specify any technological requirement that would preclude the abstract idea identified in Prong One. Therefore, the real-time nature of the claimed steps does not transform the identified abstract idea into a technological improvement, and reliance on the need for high precision does not negate the data-gathering nature of these steps.

As a whole, the additional elements merely serve to gather and feed information to the abstract idea and to output a notification based on the abstract idea, while generically implementing it on conventionally used tools. There is no practical application because the abstract idea is not applied, relied on, or used in a meaningful way. No improvement to the technology is evident, and the estimated information is not outputted in any way such that a practical benefit is realized. Therefore, the additional elements, alone or in combination, do not integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea. Further, there is no evidence of record that would support the assertion that any of these steps is an improvement to a computer or a technological solution to a technological problem. Ultimately, the Applicant describes an improvement in the process of using optical correction techniques, but this is not an improvement in the function of a computer or other technology (see MPEP 2106.05(a)(ii): "the court determined that the claimed user interface simply provided a trader with more information to facilitate market trades, which improved the business process of market trading but did not improve computers or technology"; see also MPEP 2106.04(d)(1), 2106.05(a), and 2106.05(f)). The claims are directed to the abstract idea.
Also, there does not appear to be any particular structure or machine, treatment or prophylaxis, transformation, or any other meaningful application that would render the claims eligible at Step 2A, Prong Two.

Step 2B of the subject matter eligibility test (see MPEP 2106.05): Claims 1 and 26 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claims recite additional steps of receiving image data obtained from an image-capture device; tracking at least one tool and/or bone in real-time with the image-capture device by performing image processing on the received image data during a surgical procedure; outputting, based on the tracking, signals to a display to depict the position of the at least one tool and/or bone; and storing the first orientation of the image capture device. These steps represent mere data gathering, data outputting, or pre-/post-/extra-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality. Furthermore, as discussed above, the limitations with respect to the processor language amount to mere instructions to implement the abstract idea on a computer. As discussed with respect to Step 2A, Prong Two, the additional elements in the claims amount to no more than insignificant extra-solution activity and mere instructions to apply the exception using a generic computer component. The same analysis applies here at Step 2B and does not provide an inventive concept.

The data gathering steps that were considered insignificant extra-solution activity in Step 2A, Prong Two have been re-evaluated in Step 2B and determined to be well-understood, routine, conventional activity in the field. As evidence, Richter et al. (US 2020/0315711 A1) discloses: ¶0002, "Traditionally, accurate operation of surgical instruments including their movement during surgical procedures is aided by existing navigation systems. These navigation systems use cameras, sensors, and other medical imaging devices such as X-ray and C-arm imaging systems to measure and track the actual and relative position of surgical instruments and of a patient's anatomy during an operation. A number of different tracking modalities can be utilized, including the above-mentioned optical tracking systems, as well as electromagnetic tracking systems (that utilize, e.g., coils and field generators) and others known in the art. A display device of the navigation system that is provided in the surgical room is used to output or display images and other data based on the measured instrument and patient anatomy information, such as the position of the instrument relative to the patient's anatomy." As further evidence, Yadav et al. (US 2019/0011709 A1) discloses: ¶0030, "Algorithms for calculating orientation of an object (e.g., a camera) using inertial sensor measurements are known in the art. This disclosure contemplates using any known process to calculate orientation of the cameras 120. Additionally, algorithms for correcting a real-time image based on orientation of a camera are known in the art. This disclosure contemplates using any known process to correct a real-time image based on orientation of the cameras." For these reasons, there is no inventive concept, and the claims are not patent eligible. Even when viewed as a whole, nothing in the claims adds significantly more to the abstract idea.
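To make concrete what the rejection means by steps "recited at a high level of generality," the sketch below implements the detecting, quantifying, and adjusting steps as literally as the claim language states them. It is a minimal illustration only; every function body is an assumed placeholder, not the applicant's disclosed algorithm.

```python
import numpy as np

# Minimal sketch of the claimed steps at the level of generality the
# rejection describes. Every function body is an assumed placeholder,
# not the applicant's algorithm.

def detect_orientation_change(first, second, tol=1e-3):
    """'Detecting': any difference between the stored orientations."""
    return bool(np.any(np.abs(second - first) > tol))

def quantify_angular_variation(first, second):
    """'Quantifying': angular variation about each degree of freedom."""
    return second - first  # degrees per degree of freedom

def adjust_tracking_data(tracking_data, variation):
    """'Adjusting': generic compensation for the angular variation."""
    return tracking_data - variation

first = np.array([0.0, 0.0])         # stored first orientation (deg)
second = np.array([20.0, 0.0])       # observed second orientation (deg)
tool_pose = np.array([35.0, 5.0])    # tracked tool angles, camera frame

if detect_orientation_change(first, second):
    tool_pose = adjust_tracking_data(
        tool_pose, quantify_angular_variation(first, second))
print(tool_pose)  # [15.  5.] -- the "about 20 degrees" mental estimate
```

Nothing in this loop depends on any particular sensor or image-processing technique, which is the generality point the rejection makes.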
Dependent Claims

Claim 2: "wherein the computer-readable program instructions executable by the processing unit further include automating a return of the image-capture device from the second orientation to the first orientation." This does not add significantly more than the judicial exception because it can be accomplished mentally by visually looking at an image and thinking about the change. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 3: "wherein the automating the return of the image-capture device to the first orientation includes actuating a movement of the image-capture device in the one or more degrees of freedom, the one or more degrees of freedom comprising one or two rotational degrees of freedom." This is extra-solution activity; see MPEP 2106.05(g). Moving an image capture device is an insignificant application of the system and does not add significantly more than the judicial exception. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 4: "wherein: the image-capture device includes one or more actuators; and the automating the return of the image-capture device to the first orientation is performed, via the one or more actuators, automatically after the detecting and the quantifying steps." This is extra-solution activity; see MPEP 2106.05(g). As above, the moving of an image capture device is an insignificant application of the system. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 5: "wherein the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the second orientation includes factoring in an adjustment of a point of view of the image-capture device." This can be a mental process, in that the user can visually track movement. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 6: "including alerting a user of the detecting of the change in the orientation of the image-capture device." Alerting a user is a well-understood concept and does not add significantly more than the judicial exception. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 7: "including requiring the user to quantify and/or validate the change in the orientation." Receiving user input is a well-understood concept. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 8: "including pausing the tracking of the at least one tool and/or of the at least one bone between the detecting and the quantifying steps, and resuming the tracking after the quantifying." This does not add significantly more than the judicial exception and can be a mental step, as the user can stop looking at the tool to mentally perform calculations. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 9: "wherein the detecting of the change in the orientation of the image-capture device from the first orientation includes detecting the change in the orientation when the image-capture device is coupled to a stationary structure." This can be read as a mental step, as the guiding surgeon can visually see a change in the orientation of an image capture device.
Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 10: "wherein the detecting of the change in the orientation includes continuously monitoring the orientation of the image-capture device." This can be read as a mental step, as the surgeon can visually monitor the image capture device. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 11: "including outputting the tracking of the at least one tool and/or of the at least one bone." This does not add significantly more than the judicial exception. Data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)). Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 12: "wherein the outputting the tracking of the at least one tool and/or of the at least one bone includes outputting the tracking graphically on a graphic-user interface." MPEP 2106 states that collecting data, generating data, and displaying the result does not amount to significantly more nor to a practical application. Data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)). Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 13: "wherein the tracking of the at least one tool and/or of the at least one bone includes tracking the at least one tool relative to the at least one bone." This does not add significantly more than the judicial exception, and it can be accomplished mentally by visually looking at an image or the tool itself. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 14: "wherein the tracking of the at least one tool includes tracking the at least one tool as moved by a surgical robot, and further including blocking movement of the surgical robot when detecting the change in the orientation." This does not add significantly more than the judicial exception. Further, tracking the tool can be done mentally by visually looking at an image, and blocking the movement of the surgical robot is an insignificant application.

Claim 21 recites additional limitations directed to the first orientation, which do not add significantly more than the judicial exception. One of ordinary skill in the art would be able to visually interpret an orientation of at least one tool and/or bone and mentally note the orientation during the surgical procedure. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 22 recites additional limitations directed to the first orientation, which do not add significantly more than the judicial exception. One of ordinary skill in the art would be able to visually interpret an orientation of at least one tool and/or bone, mentally note the orientation during the surgical procedure, and manually adjust the position and orientation based on said noted orientation.

Claim 23 recites additional steps of the identified abstract idea. One of ordinary skill in the art could detect, quantify, and track continuously.
Data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)).

Claim 24 recites additional steps of performing the surgical operation. The surgical workflow of detecting, quantifying, and tracking can be halted if the observer notices unwanted movement. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Claim 25 recites additional steps of performing orientation tracking. These steps can be performed by a human by mentally noting the position of the tool and/or camera with respect to a reference point while tracking the tool and/or camera. Thus, the recitation does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.

Taken alone and in combination, the additional elements do not integrate the judicial exception into a practical application, at least because the abstract idea is not applied, relied on, or used in a meaningful way. They also do not add anything significantly more than the abstract idea. Their collective functions merely provide computer/electronic implementation and processing, and no additional elements beyond those of the abstract idea. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or an output device, or improves any other technology or technical field. Therefore, the claims are rejected as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-14 and 21-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant, regards as the invention. Claim 1, line 15, recites "the position." There is insufficient antecedent basis for this limitation in the claim, as required by MPEP 2173.05(e). Accordingly, proper ordinal numbering and/or antecedent basis is required. The dependent claims of the above rejected claims are rejected due to their dependency.

Claim Objections

The following claim is objected to because of the following informality: Claim 26, line 25, should recite "change in the orientation." Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5, 9-10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1), further in view of Abhari et al. (US 2019/0015162 A1).

Claim 1: Lee discloses: A system for tracking at least one tool and/or at least one bone in real-time during computer-assisted surgery, comprising: (¶Abstract, ¶0001, FIG. 1, Claim 1)

an image-capture device (camera unit 200; the camera unit may be, for example, a stereo camera, ¶0038) configured to be rotatably coupled to an arm;
- The system of Lee includes a camera unit and a tracking processing unit (¶Abstract). The core elements of tracking are through image processing. The camera unit photographs a marker fixed to the surgical instrument and outputs a marker image (¶Abstract, Claim 1, ¶0010, ¶0038).
- The overall tracking system is designed to continuously track the measurement object regardless of movement of an optical tracker (¶0007-0009). The first inertia measurement unit 300 is fixed to the camera unit 200 (¶0010, ¶0039, Claim 1). This unit measures and outputs first inertia data comprising first acceleration and first angular velocity (¶Abstract, ¶0010, Claim 1). The measurement of angular velocity suggests that rotation is possible and measured (¶0040). When the camera unit moves, the first IMU moves along with it (¶0039). When the camera unit is in a stopped state, the first angular velocity has a value of zero (¶0040). This contrast indicates that if the camera unit is moving (i.e., not stopped), the angular velocity will be non-zero, meaning it is configured to be rotatable. In other words, angular velocity is the measurement of rotational speed. The camera unit 200 is attached to/disposed on a separate holding means (¶0038, ¶0075), i.e., an arm, so that the camera unit can easily photograph the measurement object.

a processing unit configured to receive image data obtained with the image-capture device; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for:
- The tracking processing unit 500 includes a non-transitory computer readable medium and performs complex, real-time computational functions (¶0044-0046), such as image processing (¶0045-0051), data analysis, and complex coordinate system conversions.
The tracking processing unit 500 is configured to primarily extract the position and posture of the surgical instrument using the marker image by image processing (¶Abstract, ¶0010, Claim 5; Claim 1: "a tracking processing unit configured to primarily extract a position and posture of the measurement object using the marker image, and secondarily extract the position and posture of the measurement object using the first inertia and the second inertia."). This means the tracking processing unit analyzes the marker image, i.e., it performs a function configured to receive image data obtained with the image-capture device.

tracking the at least one tool and/or the at least one bone in real-time with the image-capture device by performing image processing, with the image-capture device being at a first orientation during a surgical procedure on a patient,
- Lee explicitly teaches tracking a surgical instrument (tool), also referred to as a measurement object, using image processing performed on image data received from a camera unit (¶Abstract, ¶0033-0035). The camera unit photographs a marker fixed to the measurement object and outputs a marker image (¶Abstract, Claim 1, ¶0010, ¶0038, Claim 5). The camera unit may be, for example, a stereo camera (¶0038).
- To calculate the position and posture of the measurement object, the tracking processing unit analyzes the positions, sizes, etc., of position points of the marker from the marker image (¶0044-0045, ¶0050-0051).
- The system requires the ability to accurately track the position and posture of the surgical instrument in real time (¶0002). The tracking method steps, including photographing the marker and extracting the position and posture, are also successively performed in real time (¶0056-0058). ¶0056: "Further, all of S100, S200, S300, S400, and S500 may be successively performed in real time [emphasis added]."

storing the first orientation of the image-capture device; and
- Implicitly taught based on the architecture and methods taught by Lee. Lee teaches, ¶0070: "since the first inertia measurement unit 300 is fixed to the camera unit 200 and the second inertia measurement unit 400 is fixed to the marker 100 or the measurement object, each of a conversion determinant according to the first conversion process and a conversion determinant according to the third conversion process may have a constant value. Therefore, the tracking processing unit 500 may perform a coordinate conversion by using a given initial value, as it is, without performing a calculation for obtaining the conversion determinants according to the first and third conversion processes."

tracking the at least one tool and/or the at least one bone in real-time with the image-capture device at a first orientation;
- The system requires the ability to accurately track the position and posture of the surgical instrument in real time (¶0002). The tracking method steps, including photographing the marker and extracting the position and posture, are also successively performed in real time (¶0056-0058). ¶0056: "Further, all of S100, S200, S300, S400, and S500 may be successively performed in real time [emphasis added]."
- The camera unit's orientation, referred to as the first orientation, is addressed through the concept of the camera being in a stopped state, which is the desired state for image-based tracking (¶0039-0040).
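As a rough illustration of the fixed-initial-reference reading given to Lee's ¶0070 above, the sketch below captures a camera-to-IMU transform once and then reuses it as a constant "conversion determinant" for later coordinate conversions. The 90-degree rotation is a hypothetical stand-in, not a value disclosed by Lee.

```python
import numpy as np

# Illustration of the fixed-initial-reference reading of Lee's ¶0070:
# because the IMU is rigidly fixed to the camera unit, the camera-to-IMU
# transform is constant, so it can be stored once ("a given initial
# value") and reused for every later coordinate conversion. The 90-degree
# value is a hypothetical stand-in, not a number disclosed by Lee.

def rot_z(deg):
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# "Storing a first orientation": captured once at setup, then constant.
R_CAM_TO_IMU = rot_z(90.0)

def imu_to_camera(vec_imu):
    """Convert an IMU-frame vector into the camera frame using the
    stored, constant first-orientation transform."""
    return R_CAM_TO_IMU.T @ vec_imu

print(imu_to_camera(np.array([1.0, 0.0, 0.0])))  # ~[0., -1., 0.]
```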
detecting a change in orientation of the image-capture device from the first orientation to a second orientation,
- The camera unit 200, part of the optical tracker, has a first IMU 300 fixed to it (FIG. 1). This IMU 300 is configured to measure and output first inertia data (¶Abstract, Claim 1). The first inertia data comprises a first angular velocity (¶Abstract, ¶0010). Angular velocity is the measurement of rotational speed, which directly relates to a change in orientation. Hence, by monitoring the first angular velocity outputted by the first IMU 300, the system can detect whether the camera unit 200 has rotated or changed its orientation (i.e., from a first orientation to a second orientation).

quantifying the change in the orientation of the image-capture device from the first orientation to the second orientation, the change in the orientation quantified by an angular variation of the image-capture device about one or more degrees of freedom, and
- Lee teaches quantifying the change in the orientation of the camera unit (i.e., image-capture device) (¶Abstract). This quantification is implicitly done through the measurement of angular velocity about one or more degrees of freedom, which is a function of inertial measurement units.
- The camera unit 200, part of the optical tracker, has a first IMU 300 fixed to it (FIG. 1). This IMU 300 is configured to measure and output first inertia data (¶Abstract, Claim 1). The first inertia data comprises a first angular velocity (¶Abstract, ¶0010). Hence, by monitoring the first angular velocity outputted by the first IMU 300, the system can detect whether the camera unit 200 has rotated or changed its orientation (i.e., from a first orientation to a second orientation).
- Angular velocity is the rate of change of the angle or orientation of the camera (¶0046). Therefore, the first angular velocity directly quantifies the rotational movement of the camera unit 200 about one or more degrees of freedom.
- Lee establishes the measurement and conditions under which a non-zero angular velocity is recorded, indicating that rotational movement has occurred. The core design of Lee is directed to detecting changes in movement of the camera for re-correction in situations of unintentional movement of the camera unit during a surgical operation (¶0006). Specifically, if the camera unit 200 is in a stopped state (first orientation), the first angular velocity has a value of zero. When the camera unit 200 moves, the first IMU moves with it. When the camera unit moves (i.e., changes to a second orientation), the first angular velocity will have a non-zero value, indicating and quantifying the rotational change that has occurred (¶0039-0040). ¶0039: "The first inertia measurement unit 300 is disposed on and fixed to one side of the camera unit 200. The first inertia measurement unit 300 has a first inertial coordinate system, which indicates a three-dimensional position and movement relationship with reference to the first inertia measurement unit 300. Therefore, when the camera unit 200 moves, the first inertia measurement unit 300 moves together with the camera unit 200, and when the camera unit 200 is in a stopped state, the first inertia measurement unit 300 is in a stopped state.
Thus, a conversion relationship between the camera coordinate system and the first inertial coordinate system may be always constant." ¶0040: "The first inertia measurement unit 300 includes a sensor which can measure inertia including acceleration and angular velocity. The first inertia measurement unit 300 may be, for example, an Inertial Measurement Unit (IMU). Therefore, the first inertia measurement unit 300 measures and outputs first inertia including first acceleration and first angular velocity. Meanwhile, it is desirable that the camera unit 200 is in a stopped state when measuring the marker 100. If the camera unit 200 is in a stopped state, the first acceleration coincides with gravitational acceleration by the Earth's gravity, and the first angular velocity has a value of zero (0)."

Lee fails to disclose that the image-capture device is configured to be rotatably coupled to a moveable arm. However, Coiseur, in the context of tracking an object within a surgical field via a camera device that includes a rotational component configured to rotate the camera device, discloses a movable arm, specifically an image-capture device configured to be rotatably coupled to a moveable arm (¶0012: "improve a field of view or reduce possible errors associated with obstruction along a line of sight between a tracking element and a sensor/camera device."; ¶0031: "The camera attachment 110 or the camera device 112 may be calibrated such that the field of view 114 includes an entire surgical field or a desired portion of the surgical field. In an example, the moveable arm (e.g., the camera attachment 110) may include an anthropomorphic-like robotic arm having one or more encoders in one or more joints to determine the position of the camera device 112 relative to the rotational component 108."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image capture device of Lee to include a moveable arm such that the image capture device is configured to be rotatably coupled to the moveable arm, as taught by Coiseur, for the advantage of improving the field of view or reducing possible errors associated with obstruction along a line of sight between a tracking element and a camera (¶0012 of Coiseur).

Lee also fails to disclose: outputting, based on the tracking, signals to a display device to cause the display to depict the position of the at least one tool and/or the at least one bone; and tracking the at least one tool and/or the at least one bone with the image-capture device at the second orientation by adjusting tracking data, derived from the image processing of the received image data, of the at least one tool and/or at least one bone to account for the angular variation of the image-capture device.

However, Abhari, in the context of providing spatial information during a surgical procedure, discloses the generically recited claim limitations:

outputting, based on the tracking, signals to a display device to cause the display to depict the position of the at least one tool and/or the at least one bone;
- Abhari teaches that the system outputs data based on tracking information to a display, causing the display to depict the position of the tool. The processor is coupled to transmit output data for display on the display (311). The processor is configured to cause the display to display a virtual representation of the navigational information overlaid on the FOV (¶0004).
Since the navigational information is based on the position of the tracked tool, the virtual representation effectively depicts the position of the tool on the display.

tracking the at least one tool and/or the at least one bone with the image-capture device at the second orientation by adjusting tracking data, derived from the image processing of the received image data, of the at least one tool and/or at least one bone to account for the angular variation of the image-capture device.
- The system of Abhari determines the position and orientation of a tracked tool based on tracking information (¶Abstract, ¶0004). A visual representation of the navigational information is generated and overlaid on the optical image captured by the camera (¶0004, ¶0094). The processor is specifically configured to update the displayed virtual representation when the field-of-view (FOV) changes (¶0004-0005). Changes in the FOV occur when the camera changes position and orientation (¶0084, ¶0107). This update process is referred to as maintaining spatial persistence (¶0082). Spatial persistence is the ability to maintain spatial accuracy of a tracked tool in a certain coordinate space even when the camera's FOV changes (¶0082). When the FOV changes (e.g., the camera changes position and orientation), the visual indication is updated to reflect the position and orientation of the tracked tool in the new FOV while maintaining spatial accuracy (¶0082-0083, ¶0107).
- This adjustment relies on data derived from previous tracking (image processing). Abhari provides an example where the visual representation being displayed relates to static navigational information, meaning it is based on a selected image point that was previously stored (¶0096). When a 3D point is selected (e.g., using a tracked pointing tool), its position and orientation are determined using the tracking system (¶0088, ¶0090-0091). The determined position relies on processing of the image data captured by the tracking camera (¶0050, ¶0057-0058). This selected point is then stored in memory (¶0091). When the camera changes orientation, the processor performs calculations to update the visual representations in accordance with the changed FOV (¶0109). The processor does not need to recalculate the 3D position because the physical position of the selected point remains unchanged (¶0109). Therefore, the system of Abhari tracks the camera's orientation change and, in response, adjusts the visual representation projected onto the 2D image, using the stored 3D position (which was derived from the initial image processing of the tool at the time of selection), to account for the change in camera orientation (¶0109).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and device of Lee to include a display device for outputting, based on the tracking, signals to the display device to cause the display to depict the position of the at least one tool, as taught by Abhari. Doing so yields predictable results, such as assisting the surgeon during the procedure and providing assistance for medical navigation, as suggested by Abhari (¶0047).
With regard to the feature of, "wherein during the surgical procedure on the patient" - line 17, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and device of modified Lee to be configured for use during a surgical procedure on the patient, since if the prior art structure is capable of performing the intended use, then it meets the claim limitations. Specifically, a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim. In this case, modified Lee above meets the claim at least because the modified structure comprises the claimed structural features. See MPEP 2111.02. Claim 5: Lee as modified discloses all the elements above in claim 1. Lee fails to disclose, wherein the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the second orientation includes factoring in an adjustment of a point of view of the image-capture device. However, Abhari, relied upon above, discloses, wherein the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the second orientation includes factoring in an adjustment of a point of view of the image-capture device. -Abhari explicitly discloses that tracking the tool with the camera at the changed orientation includes factoring in an adjustment of the camera's point of view (i.e., field-of-view or FOV), ¶0004, ¶0005. Factoring in the adjustment includes, when the FOV changes, the processor being configured to update the displayed virtual representation (navigational information overlaid on the image) to follow the changed FOV, ¶0004, ¶0107-0109. This update reflects the tracked tool's position and orientation in the new FOV, thus maintaining spatial accuracy, ¶0082. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the tracking of modified Lee to include wherein the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the second orientation includes factoring in an adjustment of a point of view of the image-capture device, as taught by Abhari. The motivation to do this yields predictable results such as improving spatial persistence, i.e., storing and maintaining spatial accuracy of a tracked tool in a certain coordinate space even as the field of view of the camera changes, as suggested by Abhari, ¶0082.
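For illustration only: "adjusting tracking data ... to account for the angular variation," as characterized above, amounts to re-expressing a measurement taken at the second camera orientation in the first-orientation frame. The sketch below assumes a single rotational degree of freedom about the z axis; the function names and the 3-degree variation are illustrative assumptions, not the claimed or cited implementation.

    import numpy as np

    def rot_z(theta):
        # rotation about the z axis by theta radians
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def adjust_tracking(p_second, delta):
        # p_second: marker position measured in the camera frame at the
        # second orientation; delta: quantified angular variation (rad) of
        # the camera about z. Rotating by +delta re-expresses the point in
        # the first-orientation frame, cancelling the camera's motion.
        return rot_z(delta) @ p_second

    p_second = np.array([0.05, -0.10, 1.20])
    p_adjusted = adjust_tracking(p_second, np.radians(3.0))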
Claim 9: Lee as modified discloses all the elements above in claim 1. Lee discloses, wherein detecting the change in the orientation of the one image-capture device from the first orientation includes detecting the change in the orientation when the one image-capture device is coupled to a stationary structure. Lee discloses that the camera unit 200, which is part of the optical tracker, has a first IMU 300 fixed to it, FIG. 1. This IMU 300 is configured to measure and output first inertia data, ¶Abstract, Claim 1. The first inertia data comprises first angular velocity, ¶Abstract, ¶0010. Angular velocity is the measurement of rotational speed, which directly relates to a change in orientation. Hence, by monitoring the first angular velocity output by the first IMU 300, the system can detect whether the camera unit 200 has rotated or changed its orientation (i.e., from a first orientation to a second orientation). -The system of Lee includes a camera unit and a tracking processing unit, ¶Abstract. The core elements of tracking are through image processing. The camera unit photographs a marker fixed to the surgical instrument and outputs a marker image, ¶Abstract, Claim 1, ¶0010, ¶0038. -The overall tracking system is designed to continuously track the measurement object regardless of movement of an optical tracker, ¶0007-0009. The first inertia measurement unit 300 is fixed to the camera unit (200), ¶0010, ¶0039, Claim 1. This unit measures and outputs first inertia data comprising first acceleration and first angular velocity, ¶Abstract, ¶0010, Claim 1. The measurement of angular velocity suggests that rotation is possible and measured, ¶0040. When the camera unit moves, the first IMU moves along with it, ¶0039. When the camera unit is in a stopped state, the first angular velocity has a value of zero, ¶0040. This contrast indicates that if the camera unit is moving (i.e., not stopped), the angular velocity will be non-zero, meaning it is configured to be rotatable. In other words, angular velocity is the measurement of rotational speed. The camera unit 200 is attached to, or disposed on, a separate holding means, ¶0038, ¶0075 (i.e., an arm and a stationary structure), so that the camera unit can easily photograph the measurement object. Claim 10: Lee as modified discloses all the elements above in claim 1. Lee discloses, wherein detecting the change in the orientation includes continuously monitoring the orientation of the image-capture device. -The overall tracking system is designed to continuously track the measurement object regardless of movement of an optical tracker, ¶0007-0009. The first inertia measurement unit 300 is fixed to the camera unit (200), ¶0010, ¶0039. -Specifically, Lee discloses monitoring the camera unit's movement and orientation by a first IMU fixed to the camera unit, which measures and outputs the first inertia (i.e., first acceleration and first angular velocity), ¶0010, ¶0039. Angular velocity specifically relates to the change in the orientation, ¶0040. The overall tracking method of Lee, including measuring the first inertia (300), may be successively performed in real time, ¶0056. Thus, the tracking system is designed to continuously track the position and orientation of the measurement object regardless of the movement of the optical tracker (camera unit), ¶0007-0009, ¶0073. To achieve continuous tracking despite camera movement, the camera's own inertial data must be continuously available to the tracking processing unit (500), ¶0010, ¶0073.
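For illustration only: the continuous-monitoring reading of Lee above (a stopped state in which acceleration coincides with gravity and angular velocity is zero, versus a moving state with non-zero angular velocity) can be sketched as a loop that integrates gyro output. The tolerances, the sample interface, and the rectangular integration are assumptions of this sketch, not Lee's disclosed algorithm.

    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def camera_stopped(accel, gyro, tol_a=0.05, tol_w=0.01):
        # stopped-state test in the spirit of Lee ¶0040: acceleration
        # magnitude coincides with gravity and angular velocity is near zero
        return (abs(np.linalg.norm(accel) - G) < tol_a
                and np.linalg.norm(gyro) < tol_w)

    def monitor(imu_samples, dt):
        # continuously integrate gyro readings to detect and quantify a
        # change in orientation; imu_samples yields (accel, gyro) per dt
        angle = np.zeros(3)
        for accel, gyro in imu_samples:
            if not camera_stopped(accel, gyro):
                angle += np.asarray(gyro) * dt  # accumulate angular variation
                yield angle.copy()

    # 0.5 s of slow rotation about y at 0.02 rad/s
    samples = [(np.array([0.0, 0.0, G]), np.array([0.0, 0.02, 0.0]))] * 50
    for angle in monitor(samples, dt=0.01):
        pass  # final angle is ~0.01 rad about y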
Claim 11: Lee as modified discloses all the elements above in claim 1. Lee discloses, including outputting the tracking of the at least one tool and/or of the at least one bone. -Lee discloses outputting tracking information of the measurement object (i.e., tool) by the tracking processing unit (500), ¶Abstract, ¶0010, ¶0033. Note that extracting this data is fundamentally the output of the tracking result, even where the claim does not further define the outputting. Claims 2-4, 6-8, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1) in view of Abhari et al (US 2019/0015162 A1), as applied to claim 1, in further view of Falco (US 2007/0034731 A1). Claim 2: Lee as modified discloses all the elements above in claim 1. Lee fails to disclose, wherein the computer-readable program instructions executable by the processing unit further include automating a return of the image-capture device from the second orientation to the first orientation. However, Falco, in the context of non-fixed camera tracking systems, discloses, wherein the computer-readable program instructions executable by the processing unit further include automating a return of the image-capture device from the second orientation to the first orientation. -Falco, beginning with the Title - "Systems and methods for detecting drifts in calibrated tracking systems" - and the Abstract - "A method and system for detecting drift in calibrated tracking systems used to locate features with respect to one or more coordinate systems allows medical devices to be accurately tracked within a reference coordinate system, and facilitates detection and compensation for changes in the orientation of the tracking system with respect to the coordinate system over time." - teaches a tracking system that is not fixed. Falco discloses that the tracking system 30 is mounted by a bracket to the ceiling of room 20, ¶0037, 'The tracking system 30 is mounted by a bracket 80 to the ceiling of the room 20. A motion-detector 90 monitors at least one, but preferably all, translational or rotational components of the location and orientation of the tracking system(s) 30'. The core function of the apparatus and method described by Falco is to detect drift in calibrated tracking systems, including changes in the orientation of the tracking system over time, ¶Abstract. Hence, such a system would not be merely "fixed" if measuring drift/displacement of the tracking system is a concern. The tracking system can comprise at least one of an optical camera, an infrared camera, or a laser-based surface scanning device, ¶0013, 'The tracking system can include, for example, one or more of an optical camera, a magnetic tracker, an infrared camera, a radiofrequency based tracker, and a laser-based surface scanning device.', see also Claim 14. -Falco discloses that the tracking system (image-capture device) is actively returned to its original position and/or orientation if a movement is detected by a motion detector attached thereto, ¶0035. (¶0035, 'servo-motors, or other appropriate devices, may be incorporated into the tracking system to actively return the tracking system to its original position and/or orientation if a movement is detected by a motion detector. As a result, the relationship between the tracking system coordinates and the room coordinates may be maintained.
This may be achieved automatically by the tracking system controls, or be carried out in response to a user input.') It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-readable program of modified Lee such that it includes automating a return of the image-capture device from the second orientation to the first orientation, as taught by Falco. The motivation to do this yields predictable results such as improving the relationship between the tracking system coordinates and the room coordinates, as suggested by ¶0035 of Falco. Claim 3: Lee as modified discloses all the elements above in claim 2. Lee fails to disclose, wherein automating the return of the image-capture device to the first orientation includes actuating a movement of the image-capture device in the one or more degrees of freedom, the one or more degrees of freedom comprising one or two rotational degrees of freedom. However, Falco, relied upon above, discloses, wherein automating the return of the image-capture device to the first orientation includes actuating a movement of the image-capture device in the one or more degrees of freedom, the one or more degrees of freedom comprising one or two rotational degrees of freedom. -Falco discloses: (¶0017, 'the alignment error can consist of at least one directional component of translation and/or rotation. The method can include the step of adjusting the position and orientation of the tracking system to correct alignment errors. The adjusting step can be carried out automatically in response to the detection of an alignment error, or be carried out manually by a user. The method can also include the step of disabling the object (e.g. medical device) being tracked by the tracking system upon the detection of an alignment error.'; ¶0034, 'In order to maintain an accurate calibration of the tracking system over time and in accordance with various embodiments of the invention, the location and orientation of the tracking system may be monitored and, if necessary, adjusted and/or compensated for.'; ¶0035, 'servo-motors, or other appropriate devices, may be incorporated into the tracking system to actively return the tracking system to its original position and/or orientation if a movement is detected by a motion detector. As a result, the relationship between the tracking system coordinates and the room coordinates may be maintained. This may be achieved automatically by the tracking system controls, or be carried out in response to a user input.') -Falco further emphasizes that the tracking system calculates position and orientation in 6 DOF (i.e., three translational and three rotational), ¶0004. The system uses servo-motors or mechanical adjustment means to realign the tracking sensor if any disturbance in its position and/or orientation is detected, ¶0039. Hence, the system of Falco teaches that the automated return (actuation) aims to correct alignment errors involving both position and orientation, ¶0035-0036, which includes rotation, ¶0017, Claims 23-26. Since position and orientation collectively make up 6 DOF, an adjustment mechanism that corrects movement across any combination of these degrees of freedom to remove the misalignment effectively teaches actuating a movement of the image-capture device in one or more degrees of freedom, comprising one or two rotational degrees of freedom.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automated return of the image-capture device of modified Lee such that automating the return of the image-capture device to the first orientation includes actuating a movement of the image-capture device in one or two rotational degrees of freedom, as taught by Falco. The motivation to do this yields predictable results such as accurately locating and recording the location of the medical device, as suggested by ¶0023 of Falco. Claim 4: Lee as modified discloses all the elements above in claim 2. Lee fails to disclose, wherein: the image-capture device includes one or more actuators; and the automating the return of the image-capture device to the first orientation is performed automatically after the detecting and the quantifying steps. However, Coiseur, relied upon above, discloses, the image-capture device includes one or more actuators (rotational component 108): (¶0012, 'improve a field of view or reduce possible errors associated with obstruction along a line of sight between a tracking element and a sensor/camera device.'; ¶0031, 'The camera attachment 110 or the camera device 112 may be calibrated such that the field of view 114 includes an entire surgical field or a desired portion of the surgical field. In an example, the moveable arm (e.g., the camera attachment 110) may include an anthropomorphic-like robotic arm having one or more encoders in one or more joints to determine the position of the camera device 112 relative to the rotational component 108.') It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image capture device of modified Lee such that it includes one or more actuators, as taught by Coiseur, for the advantage of improving the field of view or reducing possible errors associated with obstruction along a line of sight between a tracking element and a camera, ¶0012 of Coiseur. Lee fails to disclose: and the automating the return of the image-capture device to the first orientation is performed, via the one or more actuators, automatically after the detecting and the quantifying steps. However, Falco, relied upon above, discloses, wherein: the image-capture device includes one or more actuators; Falco describes the components designed for automatic correction of alignment errors. Such components are servo-motors, ¶0035, 'servo-motors, or other appropriate devices, may be incorporated into the tracking system to actively return the tracking system to its original position and/or orientation if a movement is detected by a motion detector. As a result, the relationship between the tracking system coordinates and the room coordinates may be maintained. This may be achieved automatically by the tracking system controls, or be carried out in response to a user input.' The servo-motors are collectively referred to as actuators. Falco teaches: and the automating the return of the image-capture device to the first orientation is performed, via the one or more actuators, automatically after the detecting and the quantifying steps. -Falco describes "quantifying" as referring to the process of measuring and determining the magnitude and direction of the alignment error between the tracking system's current position and orientation and its originally calibrated, fixed position and orientation.
Specifically, the system detects a change in the tracking system's orientation and/or position from its initial calibrated state (i.e., first orientation) to a misaligned state (the second orientation). This change is described as an "alignment error" or a displacement. Quantifying this misalignment means determining the extent of both the translational displacement and the angular variation (rotation) about one or more axes (DOF) that has occurred, ¶0017, ¶0029, & ¶0039. Accordingly, Falco teaches detecting and quantifying the change in the orientation of the image-capture device from the first orientation to the second orientation, the change in the orientation quantified by an angular variation of the image-capture device about one or more degrees of freedom, ¶0017, Claims 23-26, ¶0039. Accordingly, the return of the image-capture device to the first orientation is performed automatically after the detecting and quantifying steps, because the system of Falco relies on the detecting and determining of the alignment error to automate the return to its originally calibrated position and orientation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image-capture device of modified Lee such that the automating of the return of the image-capture device to the first orientation is performed, via the one or more actuators, automatically after the detecting and the quantifying steps, as taught by Falco. The motivation to do this yields predictable results such as improving the relationship between the tracking system coordinates and the room coordinates, as suggested by ¶0035 of Falco. Claim 6: Lee as modified discloses all the elements above in claim 1. Lee fails to disclose, including alerting a user of the detecting of the change in the orientation of the image-capture device. However, Falco, in the context of non-fixed camera tracking systems, discloses, including alerting a user of the detecting of the change in the orientation of the image-capture device. -Falco, beginning with the Title - "Systems and methods for detecting drifts in calibrated tracking systems" - and the Abstract - "A method and system for detecting drift in calibrated tracking systems used to locate features with respect to one or more coordinate systems allows medical devices to be accurately tracked within a reference coordinate system, and facilitates detection and compensation for changes in the orientation of the tracking system with respect to the coordinate system over time." - teaches a tracking system that is not fixed. Falco discloses that the tracking system 30 is mounted by a bracket to the ceiling of room 20, ¶0037, 'The tracking system 30 is mounted by a bracket 80 to the ceiling of the room 20. A motion-detector 90 monitors at least one, but preferably all, translational or rotational components of the location and orientation of the tracking system(s) 30'. The core function of the apparatus and method described by Falco is to detect drift in calibrated tracking systems, including changes in the orientation of the tracking system over time, ¶Abstract. Hence, such a system would not be merely "fixed" if measuring drift/displacement of the tracking system is a concern.
The tracking system can comprise at least one of an optical camera, an infrared camera, or a laser-based surface scanning device, ¶0013, 'The tracking system can include, for example, one or more of an optical camera, a magnetic tracker, an infrared camera, a radiofrequency based tracker, and a laser-based surface scanning device.', see also Claim 14. Falco explicitly discloses, ¶0006, 'The invention may be configured to alert users to such misalignments, to disable related tracked objects such as treatment or imaging devices until such errors are corrected, and in some cases to automatically apply corrective actions by moving the tracking system or adjusting the treatment position or medical image accordingly.'; ¶0039, 'the tracking system 30 can be connected to a number of servo-motors, or other electrical or mechanical adjustment means (not shown), to realign the tracking sensor 70 if any disturbance in its position or orientation is detected. A signal may also be sent to a user of the portable device 40, either through a warning message being displayed on a display unit or by an audible warning signal being broadcast, if a misalignment has been detected. In some cases, the motion detector 90 can only detect a limited number of degrees of freedom of position and rotation. For example, an accelerometer can measure two out of three degrees of rotation, and can not measure translational motion. Such partial motion information is useful to alert the user to of most probable misalignments, but is not sufficient on its own to correct for misalignments in all 6 DOF […] multiple motion detectors 90, each with different degrees of freedom detection capabilities, can be combined to capture all possible types of tracking system 30 misalignment and fully correct for such misalignment.' It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of modified Lee to include alerting a user of the detecting of the change in the orientation of the image-capture device, as taught by Falco. The motivation to do this yields predictable results such as disabling related tracked objects, such as treatment devices, until such errors are corrected, as suggested by ¶0006 of Falco. Claim 7: Lee as modified discloses all the elements above in claim 6. Lee fails to disclose: including requiring the user to quantify and/or validate the change in the orientation. However, Falco, relied upon above, discloses, including requiring the user to quantify and/or validate the change in the orientation. (¶0035, 'servo-motors, or other appropriate devices, may be incorporated into the tracking system to actively return the tracking system to its original position and/or orientation if a movement is detected by a motion detector. As a result, the relationship between the tracking system coordinates and the room coordinates may be maintained. This may be achieved automatically by the tracking system controls, or be carried out in response to a user input.') It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of modified Lee to require the user to quantify and/or validate the change in the orientation, as taught by Falco. The motivation to do this yields predictable results such as disabling related tracked objects, such as treatment devices, until such errors are corrected, as suggested by ¶0006 of Falco.
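For illustration only: the alert-then-validate flow that the rejection maps onto claims 6 and 7 (warn the user of a detected orientation change, then gate further action on user input) could look like the hypothetical sketch below. The ConsoleUI stand-in and the message text are inventions of this illustration, not Falco's disclosed warning mechanism.

    class ConsoleUI:
        # hypothetical stand-in for the display / audible-warning channel
        def show_warning(self, msg):
            print("WARNING:", msg)

        def beep(self):
            print("\a", end="")

        def confirm(self, prompt):
            return input(prompt + " [y/n] ").strip().lower() == "y"

    def handle_orientation_change(delta_deg, ui):
        # alert the user that a change in camera orientation was detected,
        # then require the user to validate the change before resuming
        ui.show_warning(f"Camera orientation changed by {delta_deg:.1f} deg")
        ui.beep()
        return ui.confirm("Validate the detected orientation change?")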
Claim 8: Lee as modified discloses all the elements above in claim 1. Lee fails to disclose, including pausing the tracking of the at least one tool and/or the at least one bone between the detecting and the quantifying steps, and resuming the tracking after the quantifying. However, Falco, in the context of non-fixed camera tracking systems, discloses, including pausing the tracking of the at least one tool between the detecting and the quantifying steps, and resuming the tracking after the quantifying. -Falco, beginning with the Title - "Systems and methods for detecting drifts in calibrated tracking systems" - and the Abstract - "A method and system for detecting drift in calibrated tracking systems used to locate features with respect to one or more coordinate systems allows medical devices to be accurately tracked within a reference coordinate system, and facilitates detection and compensation for changes in the orientation of the tracking system with respect to the coordinate system over time." - teaches a tracking system that is not fixed. Falco discloses that the tracking system 30 is mounted by a bracket to the ceiling of room 20, ¶0037, 'The tracking system 30 is mounted by a bracket 80 to the ceiling of the room 20. A motion-detector 90 monitors at least one, but preferably all, translational or rotational components of the location and orientation of the tracking system(s) 30'. The core function of the apparatus and method described by Falco is to detect drift in calibrated tracking systems, including changes in the orientation of the tracking system over time, ¶Abstract. Hence, such a system would not be merely "fixed" if measuring drift/displacement of the tracking system is a concern. The tracking system can comprise at least one of an optical camera, an infrared camera, or a laser-based surface scanning device, ¶0013, 'The tracking system can include, for example, one or more of an optical camera, a magnetic tracker, an infrared camera, a radiofrequency based tracker, and a laser-based surface scanning device.', see also Claim 14. -Falco discloses: ¶0017, 'the alignment error can consist of at least one directional component of translation and/or rotation. The method can include the step of adjusting the position and orientation of the tracking system to correct alignment errors. The adjusting step can be carried out automatically in response to the detection of an alignment error, or be carried out manually by a user. The method can also include the step of disabling the object (e.g. medical device) being tracked by the tracking system upon the detection of an alignment error.'; ¶0043, 'the tilt detector 290 may also send instructions (via, for example, wireless or wired communications) to one or more components of the imaging or surgical apparatus 40 (e.g., to halt operations), thus preventing further treatment until the tracker 230 is re-calibrated to the room coordinate system 10.', Claims 24-29. -Specifically, Falco discloses that the motion-sensing device detects alignment errors, Claims 22-23. Upon the detection of an alignment error, the object (such as a treatment device or surgical device) being tracked can be disabled, ¶0017, Claims 27-29. Subsequently, the system of Falco calculates the adjustment factor/correction based on the detected displacement, ¶0037, and the tracker is recalibrated, Claim 25. The operations are prevented from resuming until the tracker is re-calibrated, ¶0043.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of modified Lee to include pausing the tracking of the at least one tool between the detecting and the quantifying steps, and resuming the tracking after the quantifying, as taught by Falco. The motivation to do this yields predictable results such as disabling related tracked objects, such as treatment devices, until such errors are corrected, as suggested by ¶0006 of Falco. Claim 25: Lee as modified discloses all the elements above in claim 1. Lee fails to disclose, wherein a global referential scheme is defined as a function of the first orientation of the image-capture device, such that the system is configured to track at least one tool and/or at least one bone in the global reference scheme. However, Falco, in the context of non-fixed camera tracking systems, discloses, wherein a global referential scheme is defined as a function of the first orientation of the image-capture device, such that the system is configured to track at least one tool and/or at least one bone in the global reference scheme. -Falco, beginning with the Title - "Systems and methods for detecting drifts in calibrated tracking systems" - and the Abstract - "A method and system for detecting drift in calibrated tracking systems used to locate features with respect to one or more coordinate systems allows medical devices to be accurately tracked within a reference coordinate system, and facilitates detection and compensation for changes in the orientation of the tracking system with respect to the coordinate system over time." - teaches a tracking system that is not fixed. Falco discloses that the tracking system 30 is mounted by a bracket to the ceiling of room 20, ¶0037, 'The tracking system 30 is mounted by a bracket 80 to the ceiling of the room 20. A motion-detector 90 monitors at least one, but preferably all, translational or rotational components of the location and orientation of the tracking system(s) 30'. The core function of the apparatus and method described by Falco is to detect drift in calibrated tracking systems, including changes in the orientation of the tracking system over time, ¶Abstract. Hence, such a system would not be merely "fixed" if measuring drift/displacement of the tracking system is a concern. The tracking system can comprise at least one of an optical camera, an infrared camera, or a laser-based surface scanning device, ¶0013, 'The tracking system can include, for example, one or more of an optical camera, a magnetic tracker, an infrared camera, a radiofrequency based tracker, and a laser-based surface scanning device.', see also Claim 14. -Specifically, Falco discloses that the system uses a fixed reference coordinate system or a first coordinate system. This coordinate system serves as the overall reference scheme.
(¶0010, 'a method for recalibrating a tracking system to a reference coordinate system includes using a motion-system device to detect alignment errors between the tracker and the fixed coordinate system, and recalibrating (either automatically or manually) the tracker to the fixed coordinate system to reduce or eliminate the alignment errors'; ¶0023, 'the tracking system may calculate (using standard methods such as triangulation) and record the location of the object as its position moves with respect to the tracking system and by extension moves with respect to a reference coordinate system to which the tracking system is calibrated to.'; ¶0037, 'A reference coordinate system 10 is defined (e.g. by orthogonal room lasers in a radiotherapy treatment room or by a surgical navigational emitter array in an operating room) in a treatment room 20. A tracking system 30 is located within the room 20 to track, record, and display the position of an object 40 (e.g., surgical instrument or "free-hand" tracked ultrasound probe for 3D imaging) that may be used in the treatment of a patient in the treatment room 20. The tracked object 40 may include any one or more of the functions described above. The tracked object 40 includes a number of features such as passive or active markers 50 that can transmit or reflect a signal 60 to the tracking sensor 70 of the tracking system 30, in order to indicate the location (either continuously or periodically) of the tracked object 40 related to the reference coordinate system 10.') It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the coordinate system of modified Lee such that it comprises a global referential scheme defined as a function of the first orientation of the image-capture device, such that the system is configured to track at least one tool and/or at least one bone in the global reference scheme, as taught by Falco. The motivation to do this yields predictable results such as improving the relationship between the tracking system coordinates and the room coordinates, as suggested by ¶0035 of Falco. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1) in view of Abhari et al (US 2019/0015162 A1), as applied to claim 11, in further view of Bonny (US 2020/0390506 A1). Claim 12: Lee as modified discloses all the elements above in claim 11. Lee fails to disclose: wherein the outputting the tracking of the at least one tool and/or of the at least one bone includes outputting the tracking graphically on a graphic-user interface. However, Bonny, from within a similar field of endeavor of tracked devices, discloses, wherein the outputting the tracking of the at least one tool and/or of the at least one bone includes outputting the tracking graphically on a graphic-user interface. (¶0026, "a display monitor 212 having a graphical user interface (GUI) for displaying one or more steps of the surgical workflow, and two or more trackable devices (e.g., a tracked digitizer probe 230 and a tracked surgical device 204)"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of modified Lee to include outputting the tracking graphically on a graphic-user interface, as taught by Bonny.
The motivation to do this yields predictable results such as aiding the user in planning the surgical procedure, ¶0043. Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1) in view of Abhari et al (US 2019/0015162 A1), as applied to claim 1, in further view of Crawford et al (US 2017/0071685 A1). Claim 13: Lee as modified discloses all the elements above in claim 1. Lee fails to disclose: wherein the tracking of the at least one tool and/or of the at least one bone includes tracking the at least one tool relative to the at least one bone. However, Crawford, from within a similar field of endeavor, teaches wherein the tracking of the at least one tool and/or of the at least one bone includes tracking the at least one tool relative to the at least one bone. (¶0058, "Images captured by automated imaging system 104 may be displayed on display 34, which may allow medical personal to locate bone and organs within a patient."; ¶0059, "Medical personnel may use imaging equipment to locate and find the spine, as detailed above. Using the images, an operator may upload the information regarding the location of the spine into automated medical system 2. Automated medical system 2 may then track, locate, and move end effector tools 26 to areas specified by the operator." - such that the tool can be tracked relative to the location of the bone). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify modified Lee to track at least one bone relative to a tool, as taught by Crawford, for the advantage of allowing medical personnel to locate bone and organs within a patient, ¶0058 of Crawford. Claim 14: Lee as modified discloses all the elements above in claim 1. Lee does not teach, wherein the tracking of the at least one tool includes tracking the at least one tool as moved by a surgical robot, and further including blocking movement of the surgical robot when detecting the change in the orientation. However, Crawford, from within a similar field of endeavor, teaches wherein the tracking of the at least one tool includes tracking the at least one tool as moved by a surgical robot, (¶0033, "End effector 22 may attach to robot arm 20 in any suitable location." and "Dynamic reference arrays 52, herein referred to as 'DRAs', are rigid bodies which may be disposed on a patient and/or tool in a navigated surgical procedure. Their purpose may be to allow 3D localization systems to track the positions of tracking markers that are embedded in the DRA 52, and thereby track the real-time position of relevant anatomy." - where an end effector can be any tool selected for a medical procedure, ¶0033, and the camera tracking system works in conjunction with the robot, ¶0039). and further including blocking movement of the surgical robot when detecting the change in the orientation. (¶0044, "While camera 46 is obstructed from viewing DRAs 52, camera tracking system 6 may send a stop signal to robot support system 4, display 34, and/or a tablet." and "This stoppage may prevent SCARA 24 and/or end effector 22 from moving and/or using medical tools without being tracked by automated medical system 2.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify modified Lee to include a surgical robot and to stop the movement of the robot if a change in the orientation is detected, as taught by Crawford, for the advantage of preventing accidental movement during a medical procedure, ¶0050 of Crawford. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1) in view of Abhari et al (US 2019/0015162 A1), as applied to claim 1, in further view of Wollowick et al (US 2020/0146753 A1, herein Wollowick '753). Claim 21: Lee as modified discloses all the elements above in claim 1. Lee fails to disclose: wherein the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the first orientation includes recording an initial orientation of the image-capture device during the surgical procedure, and setting and storing the initial orientation as a reference value. However, Wollowick '753, in the context of intra-operative imaging acquisition and calibration, discloses, wherein the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the first orientation includes recording an initial orientation of the image-capture device during the surgical procedure, and setting and storing the initial orientation as a reference value. (¶0002, 'The invention relates to acquiring digital images of anatomical features such as bones within a patient and more particularly to accurately capturing digital images of a patient during surgery.'; Claim 3, 'wherein to receive the real-time spatial orientation of the camera relative to the frame of reference, the processor is configured to: receive, from the user interface at a first time prior to the current time, a camera calibration input; receive, from the orientation system, in response to the camera calibration input, an initial spatial orientation of the camera at the first time; and record the initial spatial orientation of the camera, wherein the frame of reference is based on the initial spatial orientation of the camera.'; ¶0042, 'Spatial orientation data describing the camera's orientation in space at the time of image acquisition may be stored along with the image itself.') It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the surgical procedure of modified Lee such that tracking at least one bone with the image-capture device at a first orientation includes recording an initial orientation of the image-capture device during the surgical procedure, and setting and storing the initial orientation as a reference value, as taught by Wollowick '753. The motivation to do this yields predictable results such as improving spatial alignment guidance to improve navigation of the camera device during surgery.
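For illustration only: the claim 21 feature as mapped above (record the camera's initial orientation intra-operatively, then set and store it as a reference value) reduces to a small amount of state. The class, axis convention, and degree values below are assumptions of this sketch, not Wollowick '753's implementation.

    class OrientationReference:
        # record the camera orientation at calibration time and use it as
        # the reference value for later angular-variation checks
        def __init__(self):
            self.reference = None

        def calibrate(self, orientation_deg):
            # set and store the initial orientation as the reference value
            self.reference = tuple(orientation_deg)

        def deviation(self, current_deg):
            # per-axis angular variation relative to the stored reference
            return tuple(c - r for c, r in zip(current_deg, self.reference))

    ref = OrientationReference()
    ref.calibrate((0.0, 15.0, 0.0))          # first orientation, recorded
    print(ref.deviation((1.5, 15.0, -0.5)))  # -> (1.5, 0.0, -0.5)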
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1) in view of Abhari et al (US 2019/0015162 A1), as applied to claim 1, in further view of Wollowick et al (US 2017/0333134 A1, herein Wollowick '134). Claim 22: Lee as modified discloses all the elements above in claim 1. Lee fails to disclose: wherein the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the first orientation includes recording an orientation of the image-capture device during the surgical procedure after a manual adjustment of position and/or orientation of the image-capture device relative to an operative scene. However, Wollowick '134, in the context of intra-operative imaging acquisition and calibration, discloses, wherein the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the first orientation includes recording an orientation of the image-capture device during the surgical procedure after a manual adjustment of position and/or orientation of the image-capture device relative to an operative scene. (Abstract, 'A system and method utilizing a camera device containing an … orientation mechanism to calibrate alignment of the camera device according to a primary image such as displayed on an imaging screen, and then utilizing this calibration data to guide the system or user in acquiring a spatially aligned digital image using the camera device. Steps include recording the spatial orientation of the camera in at least two spatial dimensions when aligned with the primary image, and guiding camera alignment relative to the imaging screen when taking a picture.'; ¶0002, 'The invention relates to acquiring digital images of anatomical features such as bones within a patient and more particularly to accurately capturing digital images of a patient during surgery.'; ¶0013, 'The invention may be expressed as a method for capturing a spatially aligned digital image of a primary image of anatomy of a patient, by selecting a primary image suitable for at least one of intraoperative surgical guidance and preoperative surgical planning. A computing device and a digital camera are selected, and the digital camera is aligned with the primary image.
The method further includes determining and recording the spatial orientation of the digital camera in at least two axes, and communicating the spatial orientation to the computing device.'; Claim 7, 'A system capable of capturing a spatially aligned digital image from a primary image of anatomy of a patient, comprising: a digital camera; an orientation mechanism capable of determining the spatial orientation of the digital camera in at least two axes; a computing device in communication with the digital camera and the orientation mechanism, the computing device having a user interface and including: a device calibration module to guide a user to align the digital camera with the primary image; a calibrated guidance module capable of interfacing with the orientation mechanism to determine and record the spatial orientation of the digital camera in at least two axes, and communicating the spatial orientation to the computing device; and an image acquisition module to guide the user to move the digital camera away from the primary image until an acceptable field of view of the primary image is achieved, and obtain a digital image of the primary image after confirming that the digital camera is acceptably aligned spatially with the primary image, wherein the spatially aligned digital image is suitable for at least one of intraoperative surgical guidance and preoperative surgical planning.') It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the surgical procedure of modified Lee such that tracking at least one bone with the image-capture device at a first orientation includes recording an orientation of the image-capture device during the surgical procedure after a manual adjustment of position and/or orientation of the image-capture device relative to an operative scene, as taught by Wollowick '134. The motivation to do this yields predictable results such as improving spatial alignment guidance to improve navigation of the camera device during surgery. Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1) in view of Abhari et al (US 2019/0015162 A1), as applied to claim 1, in further view of Barnett et al (US 2018/0338801 A1). Claim 23: Lee as modified discloses all the elements above in claim 1. Lee does not teach, wherein, during the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the first orientation, the system verifies the orientation of the image-capture device without pause. However, Barnett discloses, wherein, during the tracking of the at least one tool and/or of the at least one bone with the image-capture device at the first orientation, the system verifies the orientation of the image-capture device without pause.
(¶0068, 'a camera for monitoring a plurality of physical regions and continuously tracking movement or detecting position of a plurality of medical instruments throughout a medical procedure as at least some of the plurality of medical instruments are moved between physical regions at least including a first pre-procedure region, a second procedure region and third post-procedure region; a computer connected to the camera and collecting images from the camera; and a display in electrical communication with the computer and/or camera for displaying medical instrument data pertaining to the movement or position of the plurality of medical instruments as well as images of the physical regions.'; ¶0038, 'the system or apparatus will preferably continuously track each instrument/item and display on the display at least two of the x, y and/or z coordinates for the instrument/item, as well as specific data relating to both the spatial movement and position of the instrument/item') It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of modified Lee such that the system continuously verifies the orientation of the image-capture device without pause, as taught by Barnett. The motivation to do this yields predictable results such as improving medical procedures and medical procedure management, and more particularly tracking medical instruments throughout a procedure and assisting medical personnel throughout the procedure or at least at the conclusion of the procedure, as suggested by Barnett, ¶0009. Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1) in view of Abhari et al (US 2019/0015162 A1), as applied to claim 1, in further view of Calloway et al (US 2021/0169578 A1). Claim 24: Lee as modified discloses all the elements above in claim 1. Lee does not teach, wherein the system is configured to perform the detecting, quantifying, and tracking steps as part of a surgical workflow associated with the surgical procedure, and to perform the detecting and the quantifying steps while physical aspects of the surgical workflow are halted. However, Calloway, in the context of surgical navigation of camera pose, discloses, wherein the system is configured to perform the detecting, quantifying, and tracking steps as part of a surgical workflow associated with the surgical procedure, and to perform the detecting and the quantifying steps while physical aspects of the surgical workflow are halted. (¶0055, 'Referring to FIG. 6, the navigation camera 46 has a navigation field-of-view 600 in which the pose (e.g., position and orientation) of the reference array 602 attached to the patient, the reference array 604 attached to the surgical instrument, and the robot arm 20 are tracked. The navigation camera 46 may be part of the camera tracking system 6′ of FIGS. 3B and 3C, which includes the computer platform 910 configured to perform the operations described below. The reference arrays enable tracking by reflecting light in known patterns, which are decoded to determine their respective poses by the tracking subsystem of the surgical robot 4.
If the line-of-sight between the patient reference array 602 and the navigation camera 46 is blocked (for example, by a medical personnel, instrument, etc.), further navigation of the surgical instrument may not be able to be performed and a responsive notification may temporarily halt further movement of the robot arm 20 and surgical robot 4, display a warning on the display 34, and/or provide an audible warning to medical personnel. The display 34 is accessible to the surgeon 610 and assistant 612 but viewing requires a head to be turned away from the patient and for eye focus to be changed to a different distance and location. The navigation software may be controlled by a tech personnel 614 based on vocal instructions from the surgeon.') It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the surgical procedure of modified Lee such that the steps are performed as part of a surgical workflow associated with the surgical procedure, with the surgical workflow being halted during the detecting and the quantifying, as taught by Calloway. The motivation to do this yields predictable results such as preventing unwanted movement or injury to the patient during the surgical procedure. Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al (US 2017/0224425 A1) in view of Coiseur (US 2019/0199915 A1) in view of Falco (US 2007/0034731 A1). Claim 26: Lee discloses, A system for tracking at least one tool and/or at least one bone in real-time during computer-assisted surgery, comprising: (¶Abstract, ¶0001, FIG. 1, Claim 1) a tracking system having an image capture device configured to be rotatably coupled to an arm; (camera unit 200 - the camera unit 200 may be, for example, a stereo camera, ¶0038) -The system of Lee includes a camera unit and a tracking processing unit, ¶Abstract. The core elements of tracking are through image processing. The camera unit photographs a marker fixed to the surgical instrument and outputs a marker image, ¶Abstract, Claim 1, ¶0010, ¶0038. -The overall tracking system is designed to continuously track the measurement object regardless of movement of an optical tracker, ¶0007-0009. The first inertia measurement unit 300 is fixed to the camera unit (200), ¶0010, ¶0039, Claim 1. This unit measures and outputs first inertia data comprising first acceleration and first angular velocity, ¶Abstract, ¶0010, Claim 1. The measurement of angular velocity suggests that rotation is possible and measured, ¶0040. When the camera unit moves, the first IMU moves along with it, ¶0039. When the camera unit is in a stopped state, the first angular velocity has a value of zero, ¶0040. This contrast indicates that if the camera unit is moving (i.e., not stopped), the angular velocity will be non-zero, meaning it is configured to be rotatable. In other words, angular velocity is the measurement of rotational speed. The camera unit 200 is attached to, or disposed on, a separate holding means, ¶0038, ¶0075 (i.e., an arm), so that the camera unit can easily photograph the measurement object.
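For illustration only: the point drawn from Lee's FIG. 1 arrangement above (the IMU is rigidly fixed to the camera unit, so the conversion between the two frames stays constant) is the standard constant-extrinsic composition sketched below. The 10-degree mounting offset is invented for this example; Lee does not disclose a specific extrinsic value.

    import numpy as np

    def rot_x(theta):
        # rotation about the x axis by theta radians
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

    # Because the IMU is rigidly fixed to the camera unit, the camera-to-IMU
    # transform never changes and can be stored once as an initial value.
    R_CAM_IN_IMU = rot_x(np.radians(10.0))  # made-up mounting offset

    def camera_orientation(R_imu_in_world):
        # the camera's world orientation follows the IMU's, composed with
        # the constant extrinsic
        return R_imu_in_world @ R_CAM_IN_IMU

    R_cam = camera_orientation(np.eye(3))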
a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: -The tracking processing unit 500 comprises a non-transitory computer-readable medium that performs complex, real-time computational functions, ¶0044-0046, such as image processing, ¶0045-0051, data analysis, and complex coordinate-system conversions. The tracking processing unit 500 is configured to primarily extract the position and posture of the surgical instrument using the marker image by image processing, ¶Abstract, ¶0010, Claim 5, Claim 1 - 'a tracking processing unit configured to primarily extract a position and posture of the measurement object using the marker image, and secondarily extract the position and posture of the measurement object using the first inertia and the second inertia.' This means the process involves the tracking processing unit analyzing the marker image by a function configured to receive image data obtained with the image-capture device. the tracking of the at least one tool and/or of the at least one bone in real-time with the tracking system performing image processing, with the image capture device being at a first orientation during a surgical procedure on a patient, and -Lee explicitly teaches tracking a surgical instrument (tool), also referred to as a measurement object, using image processing performed on image data received from a camera unit, ¶Abstract, ¶0033-0035. The camera unit photographs a marker fixed to the measurement object and outputs a marker image, ¶Abstract, Claim 1, ¶0010, ¶0038, Claim 5. The camera unit may be, for example, a stereo camera, ¶0038. -To calculate the position and posture of the measurement object, the tracking processing unit analyzes the positions, sizes, etc., of position points of the marker from the marker image, ¶0044-0045, ¶0050-0051. -The system requires the ability to accurately track the position and posture of the surgical instrument in real time, ¶0002. The tracking method steps, including photographing the marker and extracting the position and posture, are also successively performed in real time, ¶0056-0058. ¶0056, 'Further, all of S100, S200, S300, S400, and S500 may be successively performed in real time [emphasis added]'. -The camera unit's orientation, referred to as the first orientation, is addressed through the concept of the camera being at a stopped state, which is the desired state for image-based tracking, ¶0039-0040. storing the first orientation of the image capture device; -Implicitly taught based on the architecture and methods taught by Lee. Lee teaches, ¶0070, 'since the first inertia measurement unit 300 is fixed to the camera unit 200 and the second inertia measurement unit 400 is fixed to the marker 100 or the measurement object, each of a conversion determinant according to the first conversion process and a conversion determinant according to the third conversion process may have a constant value.
Therefore, the tracking processing unit 500 may perform a coordinate conversion by using a given initial value, as it is, without performing a calculation for obtaining the conversion determinants according to the first and third conversion processes.' detecting a change in orientation of the image capture device from the first orientation to a second orientation, -The camera unit 200, which is part of the optical tracker, has a first IMU 300 fixed to it, FIG. 1. This IMU 300 is configured to measure and output first inertia data, ¶Abstract, Claim 1. The first inertia data comprises first angular velocity, ¶Abstract, ¶0010. Angular velocity is the measurement of rotational speed, which directly relates to a change in orientation. Hence, by monitoring the first angular velocity output by the first IMU 300, the system can detect whether the camera unit 200 has rotated or changed its orientation (i.e., from a first orientation to a second orientation). quantifying the change in the orientation of the image capture device from the first orientation to the second orientation, the change in the orientation quantified by an angular variation of the image capture device about one or more degrees of freedom, and -Lee teaches quantifying the change in the orientation of the camera unit (i.e., image-capture device), ¶Abstract. This quantification is implicitly done through the measurement of angular velocity about one or more degrees of freedom, which is a function of inertial measurement units. -The camera unit 200, which is part of the optical tracker, has a first IMU 300 fixed to it, FIG. 1. This IMU 300 is configured to measure and output first inertia data, ¶Abstract, Claim 1. The first inertia data comprises first angular velocity, ¶Abstract, ¶0010. Angular velocity is the measurement of rotational speed, which directly relates to the change in the orientation. Hence, by monitoring the first angular velocity output by the first IMU 300, the system can detect whether the camera unit 200 has rotated or changed its orientation (i.e., from a first orientation to a second orientation). -Angular velocity is the rate of change of the angle or orientation of the camera, ¶0046. Therefore, the first angular velocity directly quantifies the rotational movement of the camera unit 200 about one or more degrees of freedom. -Lee establishes the measurement and conditions under which a non-zero angular velocity is recorded, indicating that rotational movement has occurred. The core design of Lee is directed to detecting changes in movement of the camera for re-correction in situations of unintentional change of the camera unit during a surgical operation, ¶0006. Specifically, if the camera unit 200 is in a stopped state (first orientation), the first angular velocity has a value of zero. When the camera unit 200 moves, the first IMU moves with it. When the camera unit moves (i.e., changes to a second orientation), the first angular velocity will have a non-zero value, indicating and quantifying the rotational change that has occurred, ¶0039-0040. ¶0039, 'The first inertia measurement unit 300 is disposed on and fixed to one side of the camera unit 200. The first inertia measurement unit 300 has a first inertial coordinate system, which indicates a three-dimensional position and movement relationship with reference to the first inertia measurement unit 300.
continuing to track the at least one tool with the image capture device at the first orientation.
-The overall tracking system is designed to continuously track the measurement object regardless of movement of the optical tracker, ¶0007-0009. The first inertia measurement unit 300 is fixed to the camera unit 200, ¶0010, ¶0039.
-Specifically, Lee discloses monitoring the camera unit's movement and orientation via a first IMU fixed to the camera unit, which measures and outputs first inertia (i.e., first acceleration and first angular velocity), ¶0010, ¶0039. Angular velocity specifically relates to the change in the orientation, ¶0040. The overall tracking method of Lee, including measuring the first inertia (300), may be successively performed in real time, ¶0056. Thus, the tracking system is designed to continuously track the position and orientation of the measurement object regardless of the movement of the optical tracker (camera unit), ¶0007-0009, ¶0073. To achieve continuous tracking despite camera movement, the camera's own inertial data must be continuously available to the tracking processing unit 500, ¶0010, ¶0073.

Lee fails to disclose: that the image-capture device is configured to be rotatably coupled to a moveable arm via one or more actuators;

However, Coiseur, in the context of tracking an object within a surgical field via a camera device that includes a rotational component configured to rotate the camera device, discloses the image-capture device configured to be rotatably coupled to a movable arm via one or more actuators (rotational component 108): ¶0012, 'improve a field of view or reduce possible errors associated with obstruction along a line of sight between a tracking element and a sensor/camera device.'; ¶0031, 'The camera attachment 110 or the camera device 112 may be calibrated such that the field of view 114 includes an entire surgical field or a desired portion of the surgical field. In an example, the moveable arm (e.g., the camera attachment 110) may include an anthropomorphic-like robotic arm having one or more encoders in one or more joints to determine the position of the camera device 112 relative to the rotational component 108.'
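The encoder-based position determination quoted from ¶0031 can be sketched, under assumed joint axes, as composing per-joint rotations read from the encoders; a real arm would use its full kinematic model, and all names here are hypothetical rather than taken from Coiseur.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Illustrative sketch of joint encoders on a moveable arm reporting angles,
# from which the camera's orientation relative to the arm base (rotational
# component) can be composed. Axes and names are assumptions for illustration.

JOINT_AXES = ['z', 'y', 'y']  # hypothetical rotation axis for each joint


def camera_orientation_from_encoders(joint_angles_rad):
    """Compose per-joint rotations into the camera's overall orientation."""
    R = Rotation.identity()
    for axis, angle in zip(JOINT_AXES, joint_angles_rad):
        R = R * Rotation.from_euler(axis, angle)
    return R.as_matrix()


R_cam = camera_orientation_from_encoders([0.2, -0.1, 0.05])  # 3x3 rotation
```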
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image capture device of Lee such that it includes a moveable arm, with the image capture device configured to be rotatably coupled to the moveable arm via one or more actuators, as taught by Coiseur, for the advantage of improving the field of view or reducing possible errors associated with obstruction along a line of sight between a tracking element and a camera, ¶0012 of Coiseur.

Lee in view of Coiseur fails to disclose: the tracking of the at least one tool and/or of the at least one bone with the image capture device at the second orientation by adjusting tracking data of the at least one tool and/or the at least one bone to account for the angular variation of the image capture device; based on the detected change in orientation and the adjusted tracking data, moving, via the one or more actuators, the image capture device from the second orientation to the first orientation; and continuing to track the at least one tool and/or of the at least one bone with the image capture device at the first orientation.

However, Falco, in the context of non-fixed camera tracking systems, teaches a tracking system that is not fixed, beginning with the Title ('Systems and methods for detecting drifts in calibrated tracking systems') and Abstract ('A method and system for detecting drift in calibrated tracking systems used to locate features with respect to one or more coordinate systems allows medical devices to be accurately tracked within a reference coordinate system, and facilitates detection and compensation for changes in the orientation of the tracking system with respect to the coordinate system over time.'). Falco discloses that the tracking system 30 is mounted by a bracket to the ceiling of room 20, ¶0037, 'The tracking system 30 is mounted by a bracket 80 to the ceiling of the room 20. A motion-detector 90 monitors at least one, but preferably all, translational or rotational components of the location and orientation of the tracking system(s) 30'. The core function of the apparatus and method described by Falco is to detect drift in calibrated tracking systems, including changes in the orientation of the tracking system over time, ¶Abstract. Hence, such a system would not be merely 'fixed' if measuring drift/displacement of the tracking system is a concern. The tracking system can comprise at least one of an optical camera, an infrared camera, or a laser-based surface scanning device, ¶0013, 'The tracking system can include, for example, one or more of an optical camera, a magnetic tracker, an infrared camera, a radiofrequency based tracker, and a laser-based surface scanning device.', see also Claim 14.

the tracking of the at least one tool with the image capture device at the second orientation by adjusting tracking data of the at least one tool to account for the angular variation of the image capture device;
-Falco discloses that the tracking system (which includes an optical camera) tracks the object (i.e., the tool), ¶0004, ¶0013, Claims 10-14. When the motion detector detects a displacement (change in orientation/angular variation), ¶0011, ¶0039, the system of Falco can compensate for this movement, ¶0006, ¶0035. The processing device calculates an adjustment factor (i.e., correction factor) based on the detected displacement, ¶0011, Claim 1. The processing device is configured to adjust the location of the object (tracking data) into a location with respect to the fixed coordinate system (i.e., a global reference scheme) in accordance with the adjustment factor, ¶0011, Claim 1.
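A minimal sketch of the adjustment-factor compensation attributed to Falco (¶0011), assuming the angular variation from the camera-fixed IMU is available: tool coordinates reported in the rotated camera frame are mapped back to the original frame by applying the detected rotation. The function name and frame conventions are illustrative assumptions, not the reference's actual implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical sketch: once the camera's angular variation is known, tool
# positions reported in the (rotated) camera frame can be re-expressed in the
# original first-orientation frame, i.e., the detected rotation acts as the
# adjustment (correction) factor applied to the tracking data.


def adjust_tracking_data(tool_pos_camera: np.ndarray,
                         angular_variation_xyz: np.ndarray) -> np.ndarray:
    """Map a camera-frame tool position back to the first-orientation frame."""
    # Rotation the camera underwent, built from the integrated angular variation.
    R_drift = Rotation.from_euler('xyz', angular_variation_xyz).as_matrix()
    # Coordinates transform inversely to the camera axes, so applying R_drift
    # to second-orientation coordinates recovers first-orientation coordinates.
    return R_drift @ tool_pos_camera


corrected = adjust_tracking_data(np.array([0.0, 0.0, 1.0]),
                                 np.array([0.0, 0.1, 0.0]))  # 0.1 rad about y
```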
based on the detected change in orientation and the adjusted tracking data, moving, via the one or more actuators, the image capture device from the second orientation to the first orientation; and
-Falco discloses that the apparatus includes a position-adjustment device (i.e., an actuator), ¶0015, ¶0024-0025, Claim 20. Servo-motor adjustment means are also incorporated into the tracking system for this purpose, ¶0035, ¶0039. The detection of the alignment error triggers the correction, ¶0031. The processing device provides a signal to the position-adjustment device, ¶0015, Claims 19-21. The adjustment step is carried out automatically in response to the detection of an alignment error, ¶0017, ¶0031. The purpose of this movement is to actively return the tracking system to its original position and/or orientation, ¶0035.
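The automatic return-to-first-orientation behavior mapped above (¶0015, ¶0035 of Falco) can be sketched, assuming a hypothetical actuator interface, as commanding the joints by the negative of the measured angular variation:

```python
import numpy as np

# Hypothetical sketch of a position-adjustment device: on detecting an
# alignment error, command the joint actuators to rotate the camera by the
# negative of the measured angular variation. The actuator API (MoveableArm,
# rotate_joints) is an assumption for illustration only.


class MoveableArm:
    def __init__(self):
        self.joint_angles = np.zeros(3)  # actuator angles about x, y, z (rad)

    def rotate_joints(self, delta: np.ndarray) -> None:
        self.joint_angles += delta  # in practice, a servo command with feedback


def return_to_first_orientation(arm: MoveableArm,
                                angular_variation: np.ndarray) -> None:
    # Undo the detected drift so tracking resumes at the first orientation.
    arm.rotate_joints(-angular_variation)


arm = MoveableArm()
arm.rotate_joints(np.array([0.0, 0.1, 0.0]))        # camera drifts to 2nd orientation
return_to_first_orientation(arm, np.array([0.0, 0.1, 0.0]))
assert np.allclose(arm.joint_angles, 0.0)           # back at the first orientation
```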
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of modified Lee such that it includes tracking of the at least one tool with the image capture device at the second orientation by adjusting tracking data of the at least one tool to account for the angular variation of the image capture device, and, based on the detected change in orientation and the adjusted tracking data, moving, via the one or more actuators, the image capture device from the second orientation to the first orientation, as taught by Falco. The motivation to do so yields predictable results, such as improving the relationship between the tracking system coordinates and the room coordinates, as suggested by ¶0035 of Falco.

In regards to the feature of 'during the surgical procedure on the patient' (line 13), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure the method and device of modified Falco for use during a surgical procedure on the patient, since if the prior art structure is capable of performing the intended use, then it meets the claim limitations. Specifically, a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim. In this case, modified Falco meets the claim at least because the modified structure comprises the claimed structural features. See MPEP 2111.02.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nicholas Robinson, whose telephone number is (571) 272-9019. The examiner can normally be reached M-F 9:00AM-5:00PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pascal Bui-Pho, can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.A.R./
Examiner, Art Unit 3798
/PASCAL M BUI PHO/
Supervisory Patent Examiner, Art Unit 3798

Prosecution Timeline

May 19, 2021
Application Filed
Mar 09, 2023
Non-Final Rejection — §101, §103, §112
Jun 09, 2023
Response Filed
Jun 30, 2023
Final Rejection — §101, §103, §112
Sep 05, 2023
Response after Non-Final Action
Sep 08, 2023
Applicant Interview (Telephonic)
Sep 08, 2023
Response after Non-Final Action
Oct 04, 2023
Request for Continued Examination
Oct 17, 2023
Response after Non-Final Action
Dec 13, 2023
Non-Final Rejection — §101, §103, §112
Mar 19, 2024
Response Filed
Jun 28, 2024
Final Rejection — §101, §103, §112
Oct 07, 2024
Request for Continued Examination
Oct 10, 2024
Response after Non-Final Action
Oct 16, 2024
Non-Final Rejection — §101, §103, §112
Feb 21, 2025
Response Filed
Jun 10, 2025
Final Rejection — §101, §103, §112
Sep 16, 2025
Request for Continued Examination
Sep 24, 2025
Response after Non-Final Action
Oct 20, 2025
Non-Final Rejection — §101, §103, §112
Feb 27, 2026
Response Filed
Mar 22, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594024
METHOD FOR PREDICTING SURVIVAL OF NON SMALL CELL LUNG CANCER PATIENTS WITH BRAIN METASTASIS
2y 5m to grant Granted Apr 07, 2026
Patent 12569219
METHODS AND SYSTEMS FOR VALVE REGURGITATION ASSESSMENT
2y 5m to grant Granted Mar 10, 2026
Patent 12569142
Method And System For Context-Aware Photoacoustic Imaging
2y 5m to grant Granted Mar 10, 2026
Patent 12569154
PATHLENGTH RESOLVED CW-LIGHT SOURCE BASED DIFFUSE CORRELATION SPECTROSCOPY
2y 5m to grant Granted Mar 10, 2026
Patent 12564381
SYSTEMS AND METHODS FOR CONTRAST ENHANCED IMAGING
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

9-10
Expected OA Rounds
49%
Grant Probability
99%
With Interview (+54.9%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 131 resolved cases by this examiner. Grant probability derived from career allow rate.
