Prosecution Insights
Last updated: April 19, 2026
Application No. 18/391,616

INTELLIGENT MEDICAL REPORT GENERATION

Final Rejection — §101, §102
Filed
Dec 20, 2023
Examiner
OBEID, FAHD A
Art Unit
3627
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Stryker Corporation
OA Round
2 (Final)
28% Grant Probability (At Risk)
3-4 OA Rounds
5y 4m To Grant
78% With Interview

Examiner Intelligence

Grants only 28% of cases
28% Career Allow Rate (63 granted / 221 resolved; -23.5% vs TC avg)
Strong +49% interview lift
+49.3% Interview Lift (allow rate with vs. without interview, among resolved cases with interview)
Typical timeline: 5y 4m avg prosecution; 17 currently pending
Career history: 238 total applications across all art units
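The headline figures above are simple ratios of the counts shown. As a sanity check, here is a minimal sketch; it assumes plain division and that the "interview lift" adds percentage points on top of the career allow rate (the tool's exact rounding rules are not stated):

```python
# Reproduce the examiner dashboard figures from the raw counts above.
# Assumptions: allow rate is a plain ratio, and "interview lift" adds
# percentage points on top of the career allow rate.
granted = 63
resolved = 221

allow_rate = granted / resolved           # career allow rate
interview_lift = 0.493                    # +49.3 percentage points
with_interview = allow_rate + interview_lift

print(f"career allow rate: {allow_rate:.1%}")     # ~28.5% (shown as 28%)
print(f"with interview:    {with_interview:.1%}")  # ~77.8% (shown as 78%)
```

The additive reading checks out: 28.5% + 49.3 points is approximately the 78% "With Interview" figure shown in the projections.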

Statute-Specific Performance

§101: 18.6% (-21.4% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 221 resolved cases.
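The per-statute deltas above are internally consistent: subtracting each delta from its allow rate recovers the same Tech Center baseline for every statute. A quick check, assuming the deltas are percentage points measured against a common per-statute TC average:

```python
# Allow rate and delta vs. Tech Center average for each statute, as listed above.
# Assumption: deltas are percentage points relative to a common TC baseline.
stats = {
    "§101": (18.6, -21.4),
    "§103": (47.5, +7.5),
    "§102": (11.7, -28.3),
    "§112": (17.4, -22.6),
}

# rate - delta recovers the TC average each delta was measured against
implied_tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # each statute implies the same ~40.0% TC baseline
```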

Office Action

§101, §102
DETAILED ACTION

This communication is in response to the reply filed on 9/3/2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 2, 5, 17, and 18 have been cancelled. Claims 1, 3-4, 6-16, and 19-20 are currently pending and have been examined.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-4, 6-16, and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Statutory Category

The claims are directed to a process (method), a machine (system), and a manufacture (computer-readable medium). All are statutory categories under §101.

Step 2A Prong One: Judicial Exception (Abstract Idea)

Are the claims directed to a judicial exception (e.g., abstract idea, law of nature, or natural phenomenon)? The claims recite a method, system, and computer-readable medium for generating a medical report by:
identifying a user profile with preferences;
using machine learning models to select images/text for a medical report;
generating a draft report with auto-generated and suggested content; and
displaying a GUI for user selection and updating the report.
These steps, individually and as an ordered combination, amount to collecting, analyzing, and displaying data, and organizing human activity, concepts that are identified in the USPTO’s 2019 Revised Patent Subject Matter Eligibility Guidance as abstract ideas (e.g., “mental processes” and “organizing human activity”).
The core of the claims is the automated organization, selection, and presentation of information (medical report content) based on user preferences and machine learning analysis, with user interaction to update the report. Per USPTO guidance, “organizing human activity,” “collecting, analyzing, and displaying data,” and “mental processes” are abstract ideas. Therefore, the claims are directed to an abstract idea, namely:
collecting and analyzing data (images, user preferences);
organizing and presenting information (medical reports); and
using generic computer implementation for these steps.

Step 2A Prong Two: Integration into a Practical Application

Do the claims integrate the abstract idea into a practical application? The claims do not recite any improvement to the functioning of a computer or another technology or technical field. The use of machine learning models is recited at a high level of generality, without specifying any particular technical solution or improvement to the field of machine learning or computer technology. The steps (identifying preferences, analyzing images, generating and displaying reports, user selection) are performed on generic computer components (processors, memory, display); the claims do not require a particular machine or any specialized hardware, and do not effect a transformation of an article. The claims thus do not add a meaningful limitation beyond the abstract idea itself. Therefore, the claims do not integrate the abstract idea into a practical application.
Step 2B: Inventive Concept

Do the claims add significantly more than the abstract idea (i.e., do they include an “inventive concept”)? When the limitations are considered both individually and as an ordered combination, they do not amount to significantly more than the abstract idea itself. The claims recite the use of “machine learning models” for selecting and annotating images and text, but do not specify any particular algorithm, technical improvement, or non-conventional use of machine learning; this is a generic computer implementation of organizing and presenting information. The remaining limitations (identifying user preferences, generating and displaying content, receiving user selection via a GUI, updating the report) are routine, conventional, and well-understood activities in the field of computer-implemented information management. There is no indication that the claims recite any unconventional hardware, technical solution, or other inventive concept sufficient to transform the abstract idea into a patent-eligible application. Therefore, the claims do not add an inventive concept beyond the abstract idea.
Regarding dependent claims 3-4 and 6-16: Each dependent claim ultimately depends from an independent claim (claim 1, 19, or 20) that has been found to be directed to an abstract idea, namely the collection, analysis, organization, and presentation of information (medical report content) using generic computer components and machine learning models, which falls within the categories of “mental processes” and “organizing human activity” identified in the USPTO’s subject matter eligibility guidance. Claims 3-4 and 6-16 are therefore rejected under 35 U.S.C. §101 as being directed to a judicial exception (abstract idea) without significantly more. The additional limitations in these dependent claims, such as specifying time windows, identifying medical events or image descriptors, associating audio, merging user-provided text, updating a graphical user interface, or updating a user profile, represent routine and conventional activities in the field of computer-implemented information management. These claims do not recite any technical improvement, unconventional hardware, or inventive concept sufficient to render the subject matter patent-eligible. Accordingly, the claims are directed to an abstract idea, do not recite additional elements that amount to significantly more than the abstract idea itself, and are therefore not directed to patent-eligible subject matter under 35 U.S.C. §101.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-4, 6-16, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wolf et al. (US 20210298869).
Regarding Claims 1, 19, and 20: Wolf teaches a computer-implemented method, comprising:

identifying, by one or more processors of a computing system, a profile associated with a user comprising user preferences for generating a medical report associated with the medical procedure (“…enables the health care provider to alter the phase names in a post-operative report.” “…review and modify the generated report.” “…input of an identifier of a health care provider…” ¶¶ 412, 469, 470, Fig. 23), wherein the user preferences comprise at least one preference associated with a content for automatic inclusion in the medical report (“…the interface may allow a healthcare provider to update the phase names by typing new phase names using a keyboard. …particular fields may be locked as unalterable without administrative rights” ¶¶ 469, 470, 485, 486);

automatically identifying, by the one or more processors using at least one first machine learning model, one or more images of the medical procedure for automatic inclusion in the medical report and one or more images of the medical procedure for suggested inclusion in the medical report (“…analyzing a plurality of frames of the surgical footage to derive image-based information for populating a post-operative report…” “…analyzing the surgical footage to identify one or more phases of the surgical procedure…” “…computer analysis may include using one or more image recognition algorithms to identify features of one or more frames of the video footage” ¶¶ 412, 428, 431, 432, 435, 437, 439, 414, Figs. 4, 25), wherein the one or more images identified for automatic inclusion in the medical report are identified based on the at least one preference associated with the type of content for automatic inclusion in the medical report (“…the interface may allow a healthcare provider to update the phase names…” “…user may be able to select or alter content in the report…” “…fields may be locked as unalterable without administrative rights.” ¶¶ 469, 412, 470, 485);

automatically generating, by the one or more processors using at least one second machine learning model, text associated with the one or more images identified for automatic inclusion in the medical report and text associated with the one or more images identified for suggested inclusion in the medical report (“…analyzing a plurality of frames…to derive image-based information for populating a post-operative report…” “…determining an event name of the identified surgical event…” “…machine-learning model…may output the name of the event corresponding to the features within the footage.” ¶¶ 412, 428, 432, 433, 434, 450, 451);

generating, by the one or more processors, a draft medical report comprising auto-generated content describing the medical procedure, wherein the auto-generated content comprises the one or more images identified for automatic inclusion in the medical report and the text associated with the one or more images identified for automatic inclusion in the medical report (“…causing the derived image-based information to populate the post-operative report of the surgical procedure.” “…auto-populated post-operative report…” ¶¶ 412, 413, 415, 416, 417, 418, 419, 469, Fig. 23);

displaying, on a display screen, a graphical user interface comprising the draft medical report, wherein the graphical user interface comprises a first region comprising the one or more images identified for automatic inclusion in the medical report and the text associated with the one or more images identified for automatic inclusion in the medical report and a second region comprising the one or more images identified for suggested inclusion in the medical report and the text associated with the one or more images identified for suggested inclusion in the medical report (“…displaying the video in a video playback region…timeline…markers…” “…post-operative report may be partitioned into different portions indicated by tabs…” ¶¶ 412, 413, 416, 418, 469, Fig. 23);

receiving, via the graphical user interface, a user selection of at least one of the one or more images identified for suggested inclusion in the medical report or the text associated with the one or more images identified for suggested inclusion in the medical report (“…interface may allow a healthcare provider to update the phase names by typing new phase names using a keyboard.” “…the healthcare provider may be enabled to alter some or all fields within the post-operative report.” ¶¶ 412, 469, 470, 485, Fig. 23); and

updating, by the one or more processors, the first region of the graphical user interface based on the user selection (“…enables the health care provider to alter the phase names in a post-operative report.” “…update the phase names by typing new phase names…” ¶¶ 469, 470, 485, 488, Fig. 23).

Regarding Claim 3: Wolf teaches the method of claim 1, further comprising: identifying one or more time windows associated with the medical procedure; and capturing an image during at least one of the one or more time windows, wherein the one or more images identified for automatic inclusion in the medical report comprise the captured image.
User preferences include time windows for capturing images (“…determining at least a beginning of each identified phase; associating a time marker with the beginning of each identified phase…” “…the time marker may be recorded in a number of ways, including, a time elapsed from the beginning of the surgical procedure, the time as measured by the time of day, or a time as it relates to some other intraoperative recorded time.” “…determining at least a beginning of the at least one phase; and wherein the derived image-based information is based on the determined beginning.” ¶¶ 434, 435, 437, 438, 468).

Regarding Claim 4: Wolf teaches the method of claim 1, wherein the user preferences comprise one or more medical report preferences indicating time windows of the medical procedure during which the user prefers to capture images. User preferences include time windows for image capture (“The interface may allow a healthcare provider to update the phase names by typing new phase names using a keyboard… particular fields may be locked as unalterable without administrative rights.” “…determining at least a beginning of each identified phase; associating a time marker with the beginning of each identified phase…” “…user input might include entry or selection of a phase, event, procedure, or device used, which input may be associated with particular video footage (e.g., for example through a lookup table or other data structure).” “…interface may allow a healthcare provider to update the phase names…” “…user may input a particular frame number, timestamp, range of times, start times and/or stop times, or any other information that may identify a video footage location.” ¶¶ 469, 434, 435, 437, 485).

Regarding Claim 6: Wolf teaches the method of claim 1, further comprising: displaying, on the display screen, the graphical user interface comprising the updated first region.
Displaying updated GUI with report (“…interface may allow a healthcare provider to update the phase names…” “…displaying the video in a video playback region… timeline… markers…” “…post-operative report may be partitioned into different portions indicated by tabs…” “…displaying the updated report…” ¶¶ 412, 413, 469, 470, Fig. 23).

Regarding Claim 7: Wolf teaches the method of claim 1, wherein updating the first region of the graphical user interface comprises: adding the at least one of the one or more images identified for suggested inclusion in the medical report to the first region of the graphical user interface. Adding suggested image to report/region (“…interface may allow a healthcare provider to update the phase names by typing new phase names using a keyboard.” “…healthcare professional may be enabled to alter some or all fields within the post-operative report.” “…adding or changing content in the report…” ¶¶ 469, 470, 485, 488, Fig. 23).

Regarding Claim 8: Wolf teaches the method of claim 1, further comprising: determining one or more medical events associated with the medical procedure based on prior performances of the medical procedure; and updating the profile of the user associated with the medical procedure, to store data indicating the one or more medical events. Determining medical events based on prior performances; updating user profile (“…machine learning model may be trained using training examples to generate markers for videos, and the trained machine learning model may be used to analyze the video and generate markers for that video.” “…interface may allow a healthcare provider to update the phase names…” “…fields may be locked as unalterable without administrative rights.” ¶¶ 412, 414, 450, 451, 469, 470, 485, 486).
Regarding Claim 9: Wolf teaches the method of claim 8, further comprising: detecting, within a video of the medical procedure, at least some of the one or more medical events; and selecting one or more images depicting the at least some of the one or more medical events, wherein the auto-generated content comprises at least some of the one or more images. Detecting medical events in video; selecting images depicting events (“…analyzing a plurality of frames of the surgical footage to derive image-based information for populating a post-operative report…” “…analyzing the surgical footage to identify a surgical event within the surgical footage…” “…machine-learning model…may output the name of the event corresponding to the features within the footage.” ¶¶ 412, 414, 432, 439, 450, 451, 455).

Regarding Claim 10: Wolf teaches the method of claim 1, further comprising: training a machine learning model to identify one or more image descriptors associated with phases of the medical procedure; and capturing, from video of the medical procedure, one or more images corresponding to the phases of the medical procedure, the auto-generated content comprising at least some of the one or more captured images. Training ML model to identify image descriptors for phases; capturing images (“…machine learning model may be trained using training examples to generate markers for videos, and the trained machine learning model may be used to analyze the video and generate markers for that video.” “…analyzing surgical footage to identify phases of the surgical procedure based on detected interactions between medical instruments and biological structures…” ¶¶ 414, 415, 416, 417, 418, 450, 451, 455).

Regarding Claim 11: Wolf teaches the method of claim 10, wherein the one or more image descriptors comprise at least one of objects, environmental factors, or contextual information associated with the phases of the medical procedure. Image descriptors include objects, environmental factors, contextual info (“…computer analysis may include using one or more image recognition algorithms to identify features of one or more frames of the video footage…” “…identifying features such as the size of anatomical structures that are being operated upon, the size of a patient, the estimated age of the patient, gender of the patient, a race of the patient, or any other characteristics related to the patient.” ¶¶ 415, 416, 417, 418, 422, 432, 433).

Regarding Claim 12: Wolf teaches the method of claim 11, further comprising: generating training data comprising images that were captured during prior performances of the medical procedure for training the machine learning model; and storing at least one of the trained machine learning model or the training data in association with the profile of the user that performed the medical procedure. Generating training data from prior images; storing model/training data with user profile (“…machine learning model may be trained using training examples…” “…profile associated with a user comprising user preferences…” “…fields may be locked as unalterable without administrative rights.” ¶¶ 414, 415, 416, 417, 418, 469, 485).

Regarding Claim 13: Wolf teaches the method of claim 11, further comprising: detecting, within video of the medical procedure, using the trained machine learning model, at least one of the one or more image descriptors; and selecting one or more images from the video of the medical procedure depicting the at least one of the one or more image descriptors, the auto-generated content comprising at least some of the one or more selected images. Detecting descriptors using ML model; selecting images for report (“…machine learning model may be used to analyze the video and generate markers for that video.” “…analyzing the surgical footage to identify features, events, phases…” ¶¶ 412, 414, 415, 416, 417, 418, 432, 433, 450, 451).
Regarding Claim 14: Wolf teaches the method of claim 1, further comprising: determining time windows associated with phases of the medical procedure; and detecting an image captured at a time different than the time windows, wherein the one or more images identified for automatic inclusion in the medical report comprise the detected image. Determining phase time windows; detecting images outside windows (“…determining at least a beginning of each identified phase; associating a time marker with the beginning of each identified phase…” ¶¶ 435, 437, 438, 468).

Regarding Claim 15: Wolf teaches the method of claim 1, further comprising: associating audio captured during the medical procedure with an image captured during a time window associated with a phase of the medical procedure. Associating audio with images during a phase (“…the post-operative report may include multiple frames of surgical footage, audio data, image data, text data (e.g., doctor notes) and the like.” “…fields may contain different types of information.” ¶¶ 412, 413, 414, 415, 417, 418, 419, 450, 451, Fig. 23).

Regarding Claim 16: Wolf teaches the method of claim 1, further comprising: receiving user-provided text; merging the user-provided text with the text associated with the one or more images identified for automatic inclusion in the medical report, wherein the draft medical report comprises the merged text. Merging user-provided text with auto-generated text (“…interface may allow a healthcare provider to update the phase names by typing new phase names using a keyboard.” “…healthcare professional may be enabled to alter some or all fields within the post-operative report.” ¶¶ 469, 470, 485, 488, Fig. 23).

Response to Arguments

Applicant’s remarks/arguments have been addressed in the rejections above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAHD A OBEID, whose telephone number is (571) 270-3324. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FAHD A OBEID/
Supervisory Patent Examiner, Art Unit 3627

Prosecution Timeline

Dec 20, 2023
Application Filed
May 30, 2025
Non-Final Rejection — §101, §102
Aug 14, 2025
Interview Requested
Aug 28, 2025
Applicant Interview (Telephonic)
Aug 28, 2025
Examiner Interview Summary
Sep 03, 2025
Response Filed
Mar 30, 2026
Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 11315099
OVER THE AIR UPDATE OF PAYMENT TRANSACTION DATA STORED IN SECURE MEMORY
Granted Apr 26, 2022 (2y 5m to grant)
Patent 9355565
CROSSING TRAFFIC DEPICTION IN AN ITP DISPLAY
Granted May 31, 2016 (2y 5m to grant)
Patent 9357081
METHOD FOR CHOOSING AN ALTERNATE OFFLINE CHARGING SYSTEM DURING AN OVERLOAD AND APPARATUS ASSOCIATED THEREWITH
Granted May 31, 2016 (2y 5m to grant)
Patent 8660750
(title not available)
Granted Feb 25, 2014 (2y 5m to grant)
Patent 8595099
(title not available)
Granted Nov 26, 2013 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4 Expected OA Rounds
28% Grant Probability
78% With Interview (+49.3%)
5y 4m Median Time to Grant
Moderate PTA Risk
Based on 221 resolved cases by this examiner. Grant probability derived from career allow rate.
