Prosecution Insights
Last updated: April 19, 2026
Application No. 18/441,495

INTELLIGENT VISUALIZATION AND AUTOMATION PLATFORM

Final Rejection — §102, §103
Filed: Feb 14, 2024
Examiner: TARKO, ASMAMAW G
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Orthopediatrics Corp.
OA Round: 2 (Final)
Grant Probability: 72% — Favorable
OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 72% — above average (284 granted / 395 resolved; +13.9% vs TC avg)
Interview Lift: +9.3% — moderate (allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 0m average prosecution; 24 applications currently pending
Career History: 419 total applications across all art units
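For readers who want to sanity-check the headline figures, the sketch below reproduces them from the raw counts shown above. It is a back-of-envelope illustration, not the platform's scoring model: the Tech Center baseline of 58.0% is inferred from the "+13.9% vs TC avg" badge, and treating the interview lift as additive percentage points is an assumption based on 72% + 9.3 ≈ 81%.

```python
# Back-of-envelope check of the examiner stats above (not the platform's model).
granted, resolved = 284, 395        # career counts shown in this panel
tc_avg_allow_rate = 0.580           # assumed baseline implied by "+13.9% vs TC avg"
interview_lift_pts = 9.3            # percentage-point lift shown in this panel

allow_rate = granted / resolved                          # ~0.719 -> 72%
delta_vs_tc = allow_rate - tc_avg_allow_rate             # ~+0.139 -> +13.9%
with_interview = allow_rate + interview_lift_pts / 100   # ~0.812 -> 81%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"vs TC average:     {delta_vs_tc:+.1%}")
print(f"With interview:    {with_interview:.1%}")
```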

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Deltas are measured against Tech Center average estimates • Based on career data from 395 resolved cases
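As a rough illustration of how a statute mix like this can be tallied, the sketch below counts rejection statutes across a set of office actions and compares each share to a Tech Center baseline. The input records and baselines are hypothetical placeholders, and whether the panel's percentages are shares of all rejections or of all office actions is not stated, so "share of rejections" is an assumption here.

```python
# Hypothetical tally of rejection statutes (placeholder data, assumed definition).
from collections import Counter

office_action_rejections = [        # one list of asserted statutes per office action
    ["102", "103"],
    ["103"],
    ["101", "103", "112"],
]
tc_avg_share = {"101": 0.40, "102": 0.40, "103": 0.40, "112": 0.40}  # placeholder baselines

counts = Counter(statute for oa in office_action_rejections for statute in oa)
total = sum(counts.values())
for statute in ("101", "102", "103", "112"):
    share = counts.get(statute, 0) / total
    delta = share - tc_avg_share[statute]
    print(f"§{statute}: {share:.1%} ({delta:+.1%} vs TC avg)")
```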

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This communication is in response to the Applicant's Amendment filed on 01/14/2026. Claims 1-2 were pending. Claim 1 is amended. Claim 2 is cancelled. Claims 3-15 are added. Claims 1 and 3-15 are currently pending. The objection to claim 2 is moot in view of Applicant's cancellation of claim 2.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 4-6, and 10-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Asaban et al. (US 20240377640 A1, hereinafter “Asaban”).

Regarding claim 1 (currently amended): Asaban discloses a method for video recording of a surgical procedure performed on a patient, comprising: supporting a video camera from the body of a medical professional (0069; Figure 2; “[0069] To capture images of ROI 24, head-mounted unit 28 includes one or more cameras 43. … one or more cameras 43 are located in proximity to the eyes of healthcare professional 26. Camera(s) 43 are located alongside the eyes in FIG. 2; but alternatively, camera(s) 43 may be mounted elsewhere on unit 28, for example above the eyes or below the eyes. According to some aspects, only one camera 43 may be used, e.g., mounted above the eyes near a center of the head-mounted unit 28 or at another location. Camera(s) 43 may comprise any suitable type of miniature color video cameras (e.g., RGB cameras or RGB-IR cameras), including an image sensor (e.g., CMOS sensor) and objective optics (and optionally a color array filter). In accordance with several embodiments, camera(s) 43 capture respective images of a field of view (FOV) 22, which may be considerably wider in angular extent than ROI 24, and may have higher resolution than is required by displays 30.”); aiming the video camera at the surgical site (0086; Figure 4; “[0086] FIG. 4 is a schematic pictorial illustration showing a magnified image presented in portion 33 of display 30, ... The magnified image shows an incision 62 made by healthcare professional 26 in a back 60 of a patient, with an augmented-reality overlay 64 showing at least a portion of the patient's vertebrae (e.g., cervical vertebrae, thoracic vertebrae, lumbar vertebrae, and/or sacral vertebrae) and/or sacroiliac joints, in registration with the magnified image. For example, overlay 64 may include a 2D image or a 3D image or model of the region of interest (ROI) 24 magnified to the same proportion as the magnified image displayed in portion 33 (e.g., a video image). The overlay 64 may be then augmented or integrated, for example, on the digitally magnified video image and in alignment with the magnified image. Overlay 64 may be based, for example, on a medical image (e.g., obtained via computed tomography (CT), X-ray, or magnetic resonance imaging (MRI) systems) acquired prior to and/or during the surgical procedure or other interventional or diagnostic procedure (e.g., open surgical procedure or minimally invasive procedure involving self-sealing incisions, such as catheter-based intervention or laparoscopic or keyhole surgery). The overlay image may be aligned or otherwise integrated with the magnified image by using image analysis (e.g., by feature-based image registration techniques). … such alignment and/or registration may be achieved by aligning the overlay image with the underlying anatomical structure of the patient, while assuming the magnified image is substantially aligned with the patient anatomy. … one or more eye trackers (e.g., eye trackers 44) may be employed which may allow a more accurate alignment of the magnified video image with the underlying patient anatomy. The eye tracker may allow capturing the ROI and in addition a display of the image on the near-eye display in alignment with the user's line of sight and the ROI in a more accurate manner when the user is not looking straightforward.”); acquiring video data of the surgical procedure, wirelessly transmitting the acquired video data to a processor having a touch screen user interface (0072 and 0123; Figures 3 and 7; “[0123] ... The systems and modules may also be transmitted as generated data signals (for example, as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and may take a variety of forms (for example, as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). ...”); and re-aiming the video camera from the touch screen user interface (0072; Figure 3; “[0072] … healthcare professional 26 may adjust the FOV 22 (which includes ROI 24) by altering a view angle (e.g., vertical view angle to accommodate the specific user's height and/or head posture), and/or the magnification of the image that is presented on displays 30, for example by means of a user interface 54 of processing system 50 (optional user adjustment step 58). User interface 54 may comprise hardware elements, such as knobs, buttons, touchpad, touchscreen, mouse, and/or a joystick, as well as software-based on-screen controls (e.g., touchscreen graphical user interface elements and/or voice controls (e.g., voice-activated controls using a speech processing hardware and/or software module). …”).

Regarding claim 4 (new): Asaban discloses the method of claim 1, which further comprises recognizing anatomical structure from the acquired video data at the surgical site and storing data representing the recognized anatomical structure (0086 and 0088-0089; Figure 4; “[0086] FIG. 4 is a schematic pictorial illustration showing a magnified image presented in portion 33 of display 30, … . The magnified image shows an incision 62 made by healthcare professional 26 in a back 60 of a patient, with an augmented-reality overlay 64 showing at least a portion of the patient's vertebrae (e.g., cervical vertebrae, thoracic vertebrae, lumbar vertebrae, and/or sacral vertebrae) and/or sacroiliac joints, in registration with the magnified image. … The overlay image may be aligned or otherwise integrated with the magnified image by using image analysis (e.g., by feature-based image registration techniques). … such alignment and/or registration may be achieved by aligning the overlay image with the underlying anatomical structure of the patient, while assuming the magnified image is substantially aligned with the patient anatomy. Alignment and/or registration of such an overlay with the underlying anatomical structure of a patient is described, …”).

Regarding claim 5 (new): Asaban discloses the method of claim 1, which further comprises recognizing equipment from the acquired video data used at the surgical site and storing data representing the recognized equipment (0063 and 0081; Figure 3; “[0063] … the processor generates and presents a magnified stereoscopic image on the see-through display, so that the user is able to see a magnified 3D-like view of the ROI. The 3D-like view may be formed by generating a three-dimensional effect which adds an illusion of depth to the display of flat or two-dimensional (2D) images, e.g., images captured by the visible light cameras. The 3D-like view may include 2D or 3D images (e.g., pre-operative and/or intraoperative anatomical medical images), virtual trajectories, guides or icons, digital representations of surgical tools or instruments, operator instructions or alerts, and/or patient information). …”).

Regarding claim 6 (new): Asaban discloses the method of claim 1, which further comprises recognizing a workflow step from the acquired video data and storing data representing the workflow step (0101-0103; Figure 6).

Regarding claim 10 (new): Asaban discloses the method of claim 1, wherein said supporting is from the head of the user with a semi-rigid skeletal structure (0094; Figure 5; “[0094] FIG. 5 is a schematic pictorial illustration showing details of a head-mounted display (HMD) unit 70, according to another embodiment of the disclosure. HMD unit 70 may be worn by healthcare professional 26, and may be used in place of head-mounted unit 28 (FIG. 1). HMD unit 70 comprises an optics housing 74 which incorporates a camera 78, and in the specific embodiment shown, an infrared camera. In some embodiments, the housing 74 comprises an infrared-transparent window 75, and within the housing, e.g., behind the window, are mounted one or more, for example two, infrared projectors 76. Additionally or alternatively, housing 74 may contain one or more color video cameras 77, as in head-mounted unit 28, and may also contain eye trackers, such as eye trackers 44.”).

Regarding claim 11 (new): Asaban discloses the method of claim 1, wherein said supporting is by a gimbal mounting capable of moving along two axes (0094; Figure 5; wherein the head-mounted unit moves along two axes).

Regarding claim 12 (new): Asaban discloses the method of claim 1, which further comprises placing a predetermined symbol on the patient, and adjusting the orientation of the camera using the predetermined symbol (0081 and 0088; Figures 3-4; “[0088] … An image of the ROI 24 and of a patient marker attached (e.g., fixedly attached) to the patient anatomy or skin and serving as a fiducial for the ROI 24 is captured, for example using a tracking camera such as distance sensor or tracking device 63 of head-mounted unit 28 or camera 78 of head-mounted unit 70. The relative location and orientation of the registration marker and the patient marker are predefined or determined, e.g., via the tracking device. The CT or other medical image and tracking camera image(s) may then be registered based on the registration marker and/or the patient marker. ...”).

Regarding claim 13 (new): Asaban discloses the method of claim 1, which further comprises commanding by voice the setting of the focal point of the video data during said acquiring (0072; Figure 3; “[0072] … According to some aspects, healthcare professional 26 may adjust the FOV 22 (which includes ROI 24) by altering a view angle (e.g., vertical view angle to accommodate the specific user's height and/or head posture), and/or the magnification of the image that is presented on displays 30, for example by means of a user interface 54 of processing system 50 (optional user adjustment step 58). User interface 54 may comprise hardware elements, such as knobs, buttons, touchpad, touchscreen, mouse, and/or a joystick, as well as software-based on-screen controls (e.g., touchscreen graphical user interface elements and/or voice controls (e.g., voice-activated controls using a speech processing hardware and/or software module). ...”).

Regarding claim 14 (new): Asaban discloses the method of claim 1, which further comprises commanding by voice the setting of the field of view of the video data during said acquiring (0072; Figure 3; “[0072] … According to some aspects, healthcare professional 26 may adjust the FOV 22 (which includes ROI 24) by altering a view angle (e.g., vertical view angle to accommodate the specific user's height and/or head posture), and/or the magnification of the image that is presented on displays 30, for example by means of a user interface 54 of processing system 50 (optional user adjustment step 58). User interface 54 may comprise hardware elements, such as knobs, buttons, touchpad, touchscreen, mouse, and/or a joystick, as well as software-based on-screen controls (e.g., touchscreen graphical user interface elements and/or voice controls (e.g., voice-activated controls using a speech processing hardware and/or software module). Additionally, or alternatively, the vertical view angle of the head-up display unit may be manually adjusted by the user (e.g., via a mechanical tilt mechanism).”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Asaban as applied to claim 1 above, and further in view of Gerstner et al. (US 20170202630 A1, hereinafter “Gerstner”).

Regarding claim 3 (new): Asaban discloses the method of claim 1, but fails to disclose wherein the touch screen user interface is in a bag configured for operation in a sterile environment. Gerstner, however, in the same field of endeavor, shows a touch screen user interface in a bag configured for operation in a sterile environment (0167; Figure 52; “[0167] FIG. 52 displays a sterile monitor cover 328 sized and proportioned to fit over the monitor assembly 22 and extend downward past the upper most plane of vertical rack assembly 12. The sterile monitor cover 328 is manufactured from a single piece of clear material to maintain visibility of the monitor 52 when draped. The top edge 330 is heat sealed to create an enclosed bag with the bottom edge 332 having an elastic band 334 that cinches around the monitor assembly 22. The sterile monitor cover 328 may be stand-alone or alternatively may be attached or integrally formed with the sterile identification barrier 14. A smaller version of the sterile monitor cover 328 may be provided as a cover for the portable electronic device 17 (e.g. tablet computer, smart phone, etc) that is used to interface with the standardization software platform 16.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the placement of the touch screen in the bag, as shown by Gerstner, with the digital stereoscopic display of the medical imager of Asaban in order to provide a sterile condition for the touch screen of the user interface.

Regarding claim 7 (new): Asaban in view of Gerstner shows the method of claim 3, and Asaban further discloses comparing the workflow step data electronically to a predetermined allocation of time slots and sending a message based on the comparison (0066, 0115 and 0127; Figure 7).

Regarding claim 8 (new): Asaban in view of Gerstner shows the method of claim 3, and Asaban further discloses placing a time stamp and a bookmark on a workflow step (0081 and 0088; Figures 3-4).

Claim Rejections - 35 USC § 103

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Asaban as applied to claim 1 above, and further in view of SIM et al. (US 20200226481 A1, hereinafter “SIM”).

Regarding claim 9 (new): Asaban discloses the method of claim 1, but fails to disclose recognizing a billing process from the acquired video data and storing data representing the billing process. SIM, however, in the same field of endeavor, shows a method comprising recognizing a billing process from the acquired video data and storing data representing the billing process (0134; Figures 23-27; “[0134] … Accordingly, in some variations the AI environment provides a platform for enabling medical institutions or other entities associated with a user group (e.g., hospital) to quickly build, customize, and/or update their own application modules using their own medical content records. Suitable medical content records may, for example, include drug information, inventory information, pricing information, medical procedure codes (e.g., ICD, surgical codes, DRG codes, etc.), billing and/or reimbursement codes, hospital guidelines, hospital protocols, dosing regimens, images, videos, etc. Examples of customized medical content application modules based on various example of medical content records are described below (e.g., with respect to FIGS. 23-27).”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the billing process of SIM, using the video from the procedure, with the digital stereoscopic display of the medical imager of Asaban in order to provide a more effective way of billing or invoicing for medical treatment received by the client.

Allowable Subject Matter

Claim 15 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments with respect to claim 1 as amended have been considered but are moot based on the new ground of rejection.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Regarding claim 1 (currently amended): Hausen (US 20230232105 A1, hereinafter “Hausen”) discloses a method for video recording of a surgical procedure performed on a patient (0065 and 0070-0071; Figures 4-9; “[0071] … any camera that is useful for recording the particular subject of the video may be connected directly or indirectly to the base station 70, and may be recorded and utilized like any other input to the base station 70. … a camera 92 may be included in, or provided as, glasses or sunglasses wearable by the user, such as the RAY-BAN® STORIES™ smart glasses of Luxottica USA LLC S.p.A. of Milan, Italy. As another example, a camera 92 may be positioned in an ambulance to view a patient during transport. ... the camera 91 may be a standard body-mounted camera worn by an EMT, paramedic, firefighter or law enforcement officer.”), comprising: supporting a video camera from the body of a medical professional (Figures 4, 11 and 16C; ‘helmet’); aiming the video camera at the surgical site (0088 and 0093-0099; Figures 16-18; “[0088] … software may be utilized to align the field of view of the camera module 30 with the user's ocular line of sight. The software may reside in the wearable unit 50, in the camera module 30, and/or elsewhere. Referring to FIG. 17, the software may operate according to an exemplary process 500. At box 502, the microphone 48 detects a voice command for voice activation of detection of a gesture. The voice command may be a word or phrase that is preset by the manufacturer, or may be a word or phrase that may be set by the user. By way of example and not limitation, the word may be “field” or the phrase may be “field of view”. According to some embodiments, the microphone 48 listens constantly, or effectively constantly at very short intervals, for the voice activation command. … the microphone 48 does not listen constantly, but is triggered to listen by a button or other mechanism or input that signals the process 500 that the user is ready to give a voice command. The user, an assistant to the user, or in surgical applications, a nurse or other professional in the operating room may press the button or otherwise provide input to the process 500. According to other embodiments, instead of voice activation in box 502, the user presses a button or provides other input to the process 500 for activation of detection of a gesture. Such embodiments that involve pressing a button or other physical input may be particularly useful, as well as cost-effective, for non-surgical use of the process where there is no sterile field to maintain.”); acquiring video data of the surgical procedure, wirelessly transmitting the acquired video data to a processor having a user interface (0068 and 0070; Figures 7-9; “[0070] Optionally, referring also to FIGS. 8-9, where the user is a surgeon, a camera 92 may be attached to one or more surgical tools 90 used by a surgeon in order to improve accuracy in the use of that tool. Very small cameras are known. As one example, the Omnivision OV6948 camera (OmniVision Technologies, Inc.; Santa Clara, Calif.) is only 0.575×0.575×0.232 mm in size. Such tiny cameras are inexpensive enough that it can be incorporated into a single-use medical device, obviating the need for sterilization between procedures. Additionally, by attaching a camera to one or more surgical tools for open surgery, the need for a separate endoscope or other relatively-bulky camera (and its support equipment) may be eliminated, reducing costs and reducing the amount of equipment needed in the operating room. A camera 92 may be attached to any surgical tool 90. For example, FIGS. 8-9 show a camera 92 attached to a standard aortic cutter 90. The inclusion of a camera 92 on the aortic cutter 90 allows the user a better view of the tissue to be treated, but also may allow the user to inspect the hole made by the aortic cutter 90 in tissue to determine whether that hole includes nicks or other abnormalities that may affect a later part of a surgical procedure. This ability to inspect closely the result of use of any surgical tool provides additional assurance that the intended procedure was performed as intended and as expected. Data from the camera 92 is transmitted to the wearable unit 50 in the same manner or a similar manner as described above with regard to the camera module 30. A cable 94 may transmit data from the camera 92 to the wearable unit 50. According to other embodiments, the camera 92 transmits data wirelessly to the wearable unit 50. ...”). Hausen fails to disclose explicitly that the user interface is a touch screen; and re-aiming the video camera from the touch screen user interface.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASMAMAW TARKO whose telephone number is (571) 272-9205. The examiner can normally be reached Monday-Friday 9:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASMAMAW G TARKO/
Patent Examiner, Art Unit 2482

Prosecution Timeline

Feb 14, 2024: Application Filed
Apr 17, 2025: Non-Final Rejection — §102, §103
Sep 19, 2025: Response Filed
Dec 01, 2025: Examiner Interview Summary
Dec 01, 2025: Examiner Interview (Telephonic)
Feb 12, 2026: Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12529288
SYSTEMS AND METHODS FOR ESTIMATING RIG STATE USING COMPUTER VISION
2y 5m to grant • Granted Jan 20, 2026
Patent 12511768
METHOD AND APPARATUS FOR DEPTH IMAGE ENHANCEMENT
2y 5m to grant • Granted Dec 30, 2025
Patent 12506865
SYSTEMS AND METHODS FOR REDUCING A RECONSTRUCTION ERROR IN VIDEO CODING BASED ON A CROSS-COMPONENT CORRELATION
2y 5m to grant • Granted Dec 23, 2025
Patent 12498482
CAMERA APPARATUS
2y 5m to grant • Granted Dec 16, 2025
Patent 12469164
VEHICLE EXTERNAL DETECTION DEVICE
2y 5m to grant • Granted Nov 11, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 81% (+9.3%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 395 resolved cases by this examiner. Grant probability derived from career allow rate.
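These projection figures are mutually consistent if the grant probability is taken to be the career allow rate and the interview figure simply adds the 9.3-point lift; that reading is an assumption about the methodology, not the platform's documented formula. A minimal sketch:

```python
# Assumed projection formula: base allow rate plus interview lift in percentage
# points, capped at 100%. Not the platform's documented model.
def grant_probability(base_allow_rate: float, interview_lift_pts: float = 0.0) -> float:
    """Return an estimated grant probability in [0, 1]."""
    return min(1.0, base_allow_rate + interview_lift_pts / 100)

print(f"{grant_probability(0.72):.0%}")       # 72% without an interview
print(f"{grant_probability(0.72, 9.3):.0%}")  # ~81% with an interview
```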
