Prosecution Insights
Last updated: April 19, 2026
Application No. 18/806,978

METHODS AND SYSTEMS FOR DYNAMICALLY LOADING ALGORITHMS BASED ON SURGERY INFORMATION

Final Rejection §103
Filed: Aug 16, 2024
Examiner: NAJARIAN, LENA
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Stryker Corporation
OA Round: 2 (Final)
Grant Probability: 38% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 5y 0m
Grant Probability With Interview: 78%

Examiner Intelligence

Career Allow Rate: 38% (178 granted / 464 resolved), -13.6% vs Tech Center average
Interview Lift: +39.3% (strong), comparing allow rates for resolved cases with vs. without an examiner interview
Typical Timeline: 5y 0m average prosecution; 41 applications currently pending
Career History: 505 total applications across all art units

Statute-Specific Performance

§101: 26.9% (-13.1% vs TC avg)
§103: 31.9% (-8.1% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 25.4% (-14.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 464 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

This communication is in response to the amendment filed 11/25/25. Claims 1, 18, and 19 have been amended. Claims 1-19 are pending.

Claim Objections

Claims 13, 16, and 17 are objected to because of the following informalities: change “wherein providing the identified machine learning model to the target device” to “wherein loading the identified machine learning model onto the target device….” Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-9, 13-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Coppersmith, III et al. (US 2019/0251723 A1) in view of Stern et al. (US 2015/0120287 A1).
(A) Referring to claim 1, Coppersmith discloses A method for dynamically loading machine learning models onto a target device for a surgical procedure, comprising (para. 39 & 61-63 of Coppersmith; the system architecture 100 may include a computing device 102 communicatively coupled to a training engine 104 and/or one or more machine learning models 106 via a network 108. The AI may include machine learning models, such as neural networks (e.g., deep learning networks, generative adversarial networks, convolutional neural networks, recurrent neural networks, fully connected neural networks, etc.). AI may be used to predict a range of visual outcomes (e.g., predicted images) based on a large set of training data that includes before and after images of patients (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, etc.) on which medical procedures were performed. The medical procedures may include non-invasive, minimally-invasive, and surgical. The medical procedures may include Cool Sculpting, plastic surgery, tummy-tucks, liposuction, face-lift, skin grafts, and the like.):

at one or more electronic devices implementing a software program (para. 13, 14, 52, and 107 of Coppersmith; a system for capturing images and generating predicted images may include a computing device (e.g., a smartphone or tablet with image capture capabilities), software (e.g., the patient visualization application including the AI, imaging operations, and display operations), and/or display/video screen (e.g., tablet, computer monitor, TV). The method 1000 may be performed by processing logic that may include hardware (circuitry, dedicated logic, etc.), software, or a combination of both. The method 400 and/or each of their individual functions, subroutines, or operations may be performed by one or more processors of a computing device (e.g., computing device 100 of FIG. 1) implementing the method 1000.):

identifying a user profile associated with the surgical procedure (para. 61, 71, & 115-118 of Coppersmith; The dataset may include multiple before and after images of patients that had medical procedures performed, for example. In some embodiments, the dataset is labeled for the before and after images, and the labels include a body region that was operated on, the one or more medical procedures that were performed, the physician that performed the medical procedures, the type of product used in the medical procedures, specific characteristics of the patient (e.g., gender, race, skin color, previous medical procedures performed, health information, allergies, etc.), and so forth.);

determining, based on the identified user profile, a plurality of candidate procedures (Fig. 11 and para. 115-118 of Coppersmith; output one or more suggested medical procedures);

obtaining data from one or more devices in a surgical environment associated with the surgical procedure (para. 39-41 of Coppersmith; The medical procedures may include non-invasive, minimally-invasive, and surgical. The medical procedures may include Cool Sculpting, plastic surgery, tummy-tucks, liposuction, face-lift, skin grafts, and the like. There may be similar medical procedures provided by a multitude of companies that may be chosen and the effects of the medical procedures selected may be represented in the modified predicted images. The techniques may obtain a before image of a patient and select various medical procedures to apply to the patient and predict (along a scale or range) how the particular patient will respond to the treatment of medical procedures (e.g., how their appearance will change after various procedures and over time). After the medical procedures are performed, one or more actual after images of the body region where the medical procedures were performed may be obtained. The actual after image may be compared with the modified predicted image that was generated by the one or more machine learning models.);

selecting at least one procedure from the plurality of candidate procedures based on: the data obtained from the one or more devices, and one or more weights associated with the data obtained from the one or more devices (para. 39-41 of Coppersmith; The AI may include machine learning models, such as neural networks (e.g., deep learning networks, generative adversarial networks, convolutional neural networks, recurrent neural networks, fully connected neural networks, etc.). AI may be used to predict a range of visual outcomes (e.g., predicted images) based on a large set of training data that includes before and after images of patients (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, etc.) on which medical procedures were performed. The medical procedures may include non-invasive, minimally-invasive, and surgical. The medical procedures may include Cool Sculpting, plastic surgery, tummy-tucks, liposuction, face-lift, skin grafts, and the like. There may be similar medical procedures provided by a multitude of companies that may be chosen and the effects of the medical procedures selected may be represented in the modified predicted images. The techniques may obtain a before image of a patient and select various medical procedures to apply to the patient and predict (along a scale or range) how the particular patient will respond to the treatment of medical procedures (e.g., how their appearance will change after various procedures and over time). The AI may be used to generate a series of recommended medical procedures to help patients obtain a particular look. The patient may schedule and undergo the medical procedures. The one or more machine learning models may implement supervised learning by modifying various parameters (e.g., weights, biases, etc.) based on the comparison to enhance the accuracy of the modified predicted images.);

identifying a machine learning model based on the selected at least one procedure (para. 41 of Coppersmith; the patient may schedule and undergo the medical procedures. After the medical procedures are performed, one or more actual after images of the body region where the medical procedures were performed may be obtained. The actual after image may be compared with the modified predicted image that was generated by the one or more machine learning models. The one or more machine learning models may implement supervised learning by modifying various parameters (e.g., weights, biases, etc.) based on the comparison to enhance the accuracy of the modified predicted images.).

Coppersmith does not expressly disclose loading the identified machine learning model onto the target device for execution and removing the identified machine learning model from the target device after execution to increase available storage space on the target device.

Stern discloses loading the identified machine learning model onto the target device for execution and removing the identified machine learning model from the target device after execution to increase available storage space on the target device (para. 10, 11, 15, 18, 26, and 32-34 of Stern; A local device or a network-based service can automatically manage speech processing models stored on the device, in order to fetch, provision, download, update, or remove speech processing models as context factors change. Store speech processing models on the device automatically, based on a set of factors, such as geographic location, loaded app content, usage patterns, or other predictive indicators. The same set of factors can be used to automatically determine speech processing models that are unlikely to be used and so can be removed. If the local device determines that multiple speech processing models are below the threshold likelihood of use based on the change in context, the local device can further determine priority rankings for the multiple speech processing models, and remove the multiple speech processing models from the mobile device based on the priority rankings. The local device can alternately remove the multiple speech processing models from the mobile device based on the priority rankings until a threshold amount of storage space on the mobile device is freed. The local device can detect a change in context, determine that a speech processing model stored on the mobile device is below a threshold likelihood of use based on the change in context, and remove at least a portion of the speech processing model from the mobile device in response to the change in context. The threshold of likelihood of use can be based on available storage space on the local device for storing speech processing models).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Stern within Coppersmith. The motivation for doing so would have been to identify which models are needed and make them available without overwhelming the limited storage space (para. 10 of Stern).

(B) Referring to claim 2, Coppersmith discloses wherein the identified user profile comprises: an identity of a user, one or more specialties of the user, one or more device settings, one or more procedure types, or any combination thereof (para. 61, 71, & 115-118 of Coppersmith).

(C) Referring to claim 3, Coppersmith discloses wherein the user profile is identified based on: a user input indicative of the identity of the user, audio data associated with the surgical environment, image data associated with the surgical environment, location information associated with the user, or any combination thereof (para. 61, 71, & 115-118 of Coppersmith).

(D) Referring to claim 4, Coppersmith discloses wherein the one or more devices comprise at least one imager, and wherein the data obtained from the one or more devices comprises device mode data associated with the at least one imager (para. 39-41, 61, & 97 of Coppersmith).

(E) Referring to claim 5, Coppersmith discloses wherein the device mode data comprises: one or more camera specialty settings, one or more image modes, or any combination thereof (para. 61, 65, and 97 of Coppersmith).

(F) Referring to claim 7, Coppersmith discloses wherein the one or more devices comprise at least one sensor, and wherein the data obtained from the one or more devices comprises contextual information obtained by the sensor (para. 42, 43, and 98 of Coppersmith).

(G) Referring to claim 8, Coppersmith discloses wherein the at least one sensor comprises an image sensor, an audio sensor, or any combination thereof (para. 42, 43, and 98 of Coppersmith).

(H) Referring to claim 9, Coppersmith discloses wherein the one or more weights are determined based on a time and/or a time duration associated with at least a portion of the data from the one or more devices (para. 63 of Coppersmith).

(I) Referring to claim 13, Coppersmith discloses wherein providing the identified machine learning model to the target device for execution comprises: selecting a version of the identified machine learning model from a plurality of versions of the identified machine learning model; and providing the selected version of the identified machine learning model to the target device (para. 42, 61, and 108-111 of Coppersmith).

(J) Referring to claim 14, Coppersmith discloses wherein the version of the identified machine learning model is selected based on a device type of the target device (para. 61, 71, and 108-111 of Coppersmith).
(K) Referring to claim 15, Coppersmith discloses wherein the version of the identified machine learning model is selected based on medical history of one or more patients (para. 61, 71, and 108-111 of Coppersmith).

(L) Referring to claim 17, Coppersmith discloses wherein the target device comprises a processing device, and wherein providing the identified machine learning model to the target device comprises: providing a set of executable instructions associated with the identified machine learning model to the processing device (para. 107, 42, & 43 of Coppersmith).

(M) Claims 18 and 19 differ from claim 1 by reciting “A system for dynamically loading machine learning models onto a target device for a surgical procedure, comprising: one or more processors; one or more memories; and one or more programs, wherein the one or more programs are stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs including instructions for…” (para. 39, 61-63, 107, 130, and 136 of Coppersmith) and “A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors of an electronic device, cause the device to…” (para. 17 and 131 of Coppersmith). The remainder of claims 18 and 19 repeat substantially the same limitations as claim 1, and are therefore rejected for the same reasons given above.

Claims 6, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Coppersmith, III et al. (US 2019/0251723 A1) in view of Stern et al. (US 2015/0120287 A1), and further in view of Shelton, IV et al. (US 2019/0207857 A1).

(A) Referring to claim 6, Coppersmith and Stern do not disclose wherein the one or more devices comprise at least one surgical instrument, and wherein the data obtained from the one or more devices is indicative of whether the at least one surgical instrument is active. Shelton discloses wherein the one or more devices comprise at least one surgical instrument, and wherein the data obtained from the one or more devices is indicative of whether the at least one surgical instrument is active (para. 290 of Shelton). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Shelton within Coppersmith and Stern. The motivation for doing so would have been to process information from the devices (para. 290 of Shelton).

(B) Referring to claim 11, Coppersmith and Stern do not disclose wherein the target device is selected from a plurality of candidate target devices based on a state of the target device and a storage capacity of the target device. Shelton discloses wherein the target device is selected from a plurality of candidate target devices based on a state of the target device and a storage capacity of the target device (para. 240, 241, 340, 360, & 361 of Shelton). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Shelton within Coppersmith and Stern. The motivation for doing so would have been to use devices that are available and appropriate for the procedure (para. 261 & 240 of Shelton).

(C) Referring to claim 12, Coppersmith and Stern do not disclose wherein the plurality of candidate devices comprises: a display, a camera, a programmable device, a processing device, or any combination thereof. Shelton discloses wherein the plurality of candidate devices comprises: a display, a camera, a programmable device, a processing device, or any combination thereof (para. 240 and 241 of Shelton). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Shelton within Coppersmith and Stern. The motivation for doing so would have been to choose the device depending on the type of surgical procedure (para. 240 of Shelton).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Coppersmith, III et al. (US 2019/0251723 A1) in view of Stern et al. (US 2015/0120287 A1), and further in view of Alvi et al. (US 9,788,907 B1).

(A) Referring to claim 10, Coppersmith and Stern do not disclose wherein the one or more weights are determined based on device type data associated with at least one of the one or more devices. Alvi discloses wherein the one or more weights are determined based on device type data associated with at least one of the one or more devices (col. 12, line 56-col. 13, line 7 and col. 27, line 38-col. 28, line 21 of Alvi). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Alvi within Coppersmith and Stern. The motivation for doing so would have been to recommend a preferred surgical route (col. 12, lines 56-59 of Alvi).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Coppersmith, III et al. (US 2019/0251723 A1) in view of Stern et al. (US 2015/0120287 A1), and further in view of Guim Bernat et al. (US 2020/0285523 A1).

(A) Referring to claim 16, Coppersmith and Stern do not disclose wherein the target device comprises a programmable device, and wherein providing the identified machine learning model to the target device comprises: providing a bitstream associated with the identified machine learning model to the programmable device. Guim Bernat discloses wherein the target device comprises a programmable device, and wherein providing the identified machine learning model to the target device comprises: providing a bitstream associated with the identified machine learning model to the programmable device (para. 31 of Guim Bernat).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Guim Bernat within Coppersmith and Stern. The motivation for doing so would have been to perform a set of operations according to a defined configuration (para. 31 of Guim Bernat).

Response to Arguments

Applicant’s arguments, see pages 9-10, filed 11/25/25, with respect to claims 1, 18, and 19 have been fully considered and are persuasive. The § 101 rejection of claims 1-19 has been withdrawn.

Applicant’s arguments with respect to claims 1, 18, and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant’s additional arguments filed 11/25/25 have been fully considered but they are not persuasive. Applicant’s arguments will be addressed hereinbelow in the order in which they appear in the response filed 11/25/25.

(1) Coppersmith fails to disclose or suggest "identifying a machine learning model based on the selected at least one procedure," as recited in amended claim 1.

(A) As per the first argument, the broadest reasonable interpretation of “identifying a machine learning model based on the selected at least one procedure” would include the “one or more machine learning models” with modified “various parameters (e.g., weights, biases, etc.),” disclosed in paragraph 41 of Coppersmith. Para. 41 of Coppersmith discloses: “the patient may schedule and undergo the medical procedures. After the medical procedures are performed, one or more actual after images of the body region where the medical procedures were performed may be obtained. The actual after image may be compared with the modified predicted image that was generated by the one or more machine learning models. The one or more machine learning models may implement supervised learning by modifying various parameters (e.g., weights, biases, etc.) based on the comparison to enhance the accuracy of the modified predicted images.” Note the claim lacks detail regarding what the “identifying” step entails or how the model is identified.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LENA NAJARIAN whose telephone number is (571)272-7072. The examiner can normally be reached Monday - Friday, 9:30 am-6 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid, can be reached at (571)270-1813.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LENA NAJARIAN/
Primary Examiner, Art Unit 3687

Prosecution Timeline

Aug 16, 2024
Application Filed
Aug 27, 2025
Non-Final Rejection — §103
Oct 31, 2025
Interview Requested
Nov 12, 2025
Examiner Interview Summary
Nov 25, 2025
Response Filed
Mar 02, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573489
INFUSION PUMP LINE CONFIRMATION
2y 5m to grant; granted Mar 10, 2026

Patent 12562247
PATIENT DATA MANAGEMENT PLATFORM
2y 5m to grant; granted Feb 24, 2026

Patent 12542208
ALERT NOTIFICATION DEVICE OF DENTAL PROCESSING MACHINE, ALERT NOTIFICATION SYSTEM, AND NON-TRANSITORY RECORDING MEDIUM STORING COMPUTER PROGRAM FOR ALERT NOTIFICATION
2y 5m to grant; granted Feb 03, 2026

Patent 12488880
Discovering Context-Specific Serial Health Trajectories
2y 5m to grant; granted Dec 02, 2025

Patent 12488894
SYSTEM AND METHODS FOR MACHINE LEARNING DRIVEN CONTOURING CARDIAC ULTRASOUND DATA
2y 5m to grant; granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 38%
Grant Probability With Interview: 78% (+39.3%)
Median Time to Grant: 5y 0m
PTA Risk: Moderate
Based on 464 resolved cases by this examiner. Grant probability derived from career allow rate.
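The footnote above says grant probability is derived from the examiner's career allow rate. A minimal sketch of that arithmetic, using the counts reported on this page: function names are illustrative, and the 38.7% without-interview rate is an assumption inferred from the reported 78% with-interview figure and the +39.3-point lift.

```python
# Sketch: deriving the headline metrics from the reported career counts.
# Names are illustrative; the 38.7% without-interview allow rate is inferred
# (78.0 with-interview minus the +39.3 lift), not stated directly on the page.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gain in allow rate when an interview is held."""
    return rate_with - rate_without

career = allow_rate(178, 464)        # 178 granted / 464 resolved, shown as 38%
lift = interview_lift(78.0, 38.7)    # matches the +39.3% interview lift card
```

The same two numbers explain the "78% With Interview" projection: it is the with-interview allow rate, and the lift is just its gap over the without-interview rate.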
