Prosecution Insights
Last updated: April 19, 2026
Application No. 18/710,704

MONITORING AN ENTITY IN A MEDICAL FACILITY

Non-Final OA: §103, §112
Filed: May 16, 2024
Examiner: TITCOMB, WILLIAM D
Art Unit: 2178
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Koninklijke Philips N.V.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 83% (516 granted / 619 resolved), +28.4% vs Tech Center average (above average)
Interview Lift: +14.4% (moderate) for resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 17 applications currently pending
Career History: 636 total applications across all art units

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 28.9% (-11.1% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 619 resolved cases.
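As a quick consistency check on the table, each statute's delta can be inverted to recover the Tech Center baseline it was measured against. This is a sketch of the arithmetic only; the variable names are mine, and the figures are copied from the rows above:

```python
# Each row reports a rate and its delta vs the Tech Center (TC) average,
# so the implied TC baseline is rate - delta.
rates = {
    "101": (9.7, -30.3),
    "103": (41.6, 1.6),
    "102": (28.9, -11.1),
    "112": (15.5, -24.5),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
```

All four rows recover the same 40.0% baseline, consistent with a single Tech Center average estimate underlying the chart.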

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. A Preliminary Amendment filed May 16, 2024 has been entered; it, inter alia, amends claims 1, 3-6, 8, 10, and 12-17 and cancels claims 7, 9, and 11, leaving claims 1-6, 8, 10, and 12-18 pending for consideration and examination.

Claim Objections

Claim 8 is objected to because of the following informality: when language was added in the previous amendment, the period after "model" was not removed. Appropriate correction is required.

Claim Interpretation

During patent examination, pending claims must be "given their broadest reasonable interpretation consistent with the specification." MPEP 2111; see also MPEP 2173.02. Limitations appearing in the specification but not recited in the claim are not read into the claim. In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-551 (CCPA 1969); see also In re Zletz, 893 F.2d 319, 321-22, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) ("During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow"). The reason is simply that during patent prosecution, when claims can be amended, ambiguities should be recognized, scope and breadth of language explored, and clarification imposed. An essential purpose of patent examination is to fashion claims that are precise, clear, correct, and unambiguous; only in this way can uncertainties of claim scope be removed, as much as possible, during the administrative process.

The Examiner respectfully requests that the Applicant, in preparing responses, fully consider the entirety of the reference(s) as potentially teaching all or part of the claimed invention. It is noted that references are relevant as prior art for all they contain.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 5 currently recites:

5. (Currently Amended) A method as in claim 1, wherein the image is a frame in a video and wherein the method further comprises repeating steps i), ii) and iii) on a sequence of frames in the video; and determining a change in posture or a change in location of the first entity across the sequence of frames.

The Examiner cannot construe this claim language: the claim recites repeating three steps, i) through iii), that are not previously introduced or otherwise discussed. The claim therefore cannot be examined against the prior art at this time. Appropriate correction and/or amendment is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2017/0239000 A1 to Moctezuma de la Barrera et al. (hereinafter Moctezuma) in view of U.S. Patent Application Publication No. 2020/0350063 A1 to Thornton et al. (hereinafter Thornton).

With regard to claim 1, Moctezuma discloses:

1. (Currently Amended) A computer implemented method for use in monitoring a first entity in a medical facility, the method comprising:

obtaining an image of the medical facility (see the detailed description, including the Summary: a robotic system having a tracking device operative with the localizer to determine a current placement of the robotic system; moreover, the method also includes guiding placement of the robotic system in the operating room by displaying representations of the current placement and the desired placement of the robotic system, para. 0008);

using a machine learning process to fit a first articulated model to the first entity in the image, wherein the first articulated model comprises keypoints corresponding to joints and affinity fields that indicate links between the keypoints (see Fig. 5D and the detailed description: generic bone representations are used to generally show proper placement based on distance from the knee joint, for example, or distances from certain anatomical landmarks associated with the knee joint (e.g., distances from patella, tibial tubercle, etc.), para. 0068); and

determining a location or posture of the first entity in the medical facility from relative locations of fitted keypoints of the first articulated model in the image, wherein the step of using a machine learning process to fit a first articulated model (see the detailed description and Fig. 8: written instructions on the displays 28, 29 can indicate distances from the anatomical landmarks to each of the trackers 44, 46 (distances may be indicated from landmark to the base of each tracker 44, 46 mounted to bone); a desired distance between trackers 44, 46 (or the bases thereof) may also be numerically and visually depicted on the displays 28, 29, para. 0068; machine learning is interpreted to include the generic bone representations and proper placement, when compared to the indicated landmarks on the patient's bone structure) to a first entity in the image comprises:

using a first deep neural network to determine a first set of locations in the image corresponding to the keypoints in the first articulated model; and

using a first graph-fitting process that takes as input the locations in the image corresponding to the keypoints and the affinity fields in the first model to fit the first articulated model to the first entity in the image.
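The fitting step recited above is a two-stage pipeline: a deep neural network proposes candidate image locations for each keypoint, then a graph-fitting process uses the affinity fields to link candidates into a skeleton, and posture is read from the relative locations of the fitted keypoints. The following is a minimal sketch of that idea only; the skeleton, coordinates, and every helper name are invented for illustration and come from neither the application nor the cited references:

```python
import math

# Links the articulated model defines (a tiny hypothetical skeleton).
SKELETON = [("hip", "knee"), ("knee", "ankle")]

def link_score(p, q, field):
    """Score how well the segment p->q aligns with the link's affinity
    field, given here as a unit vector (dot product of unit vectors)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm) * field[0] + (dy / norm) * field[1]

def fit_model(candidates, fields):
    """Greedy stand-in for the graph-fitting stage: for each link, keep the
    candidate pair best aligned with that link's affinity field. (A real
    fitter would solve the assignment jointly; later links may overwrite
    shared keypoints in this sketch.)"""
    fitted = {}
    for a, b in SKELETON:
        score, p, q = max(
            ((link_score(p, q, fields[(a, b)]), p, q)
             for p in candidates[a] for q in candidates[b]),
            key=lambda t: t[0],
        )
        fitted[a], fitted[b] = p, q
    return fitted

def posture(fitted):
    """Crude posture read-out from relative keypoint locations in image
    coordinates: vertical hip-to-ankle span dominating suggests upright."""
    dx = abs(fitted["ankle"][0] - fitted["hip"][0])
    dy = abs(fitted["ankle"][1] - fitted["hip"][1])
    return "upright" if dy > dx else "lying"

# Stand-in for the neural network's output: candidate locations per keypoint
# (the second knee candidate is a spurious detection) and per-link fields
# saying both limbs point straight down in the image.
cands = {"hip": [(50, 40)], "knee": [(50, 80), (90, 41)], "ankle": [(50, 120)]}
flds = {("hip", "knee"): (0.0, 1.0), ("knee", "ankle"): (0.0, 1.0)}
pose = fit_model(cands, flds)   # picks (50, 80), rejecting the spurious knee
```

The affinity-field check is what rejects the spurious knee candidate: the segment to (90, 41) runs nearly horizontal, so its dot product with the downward field vector is near zero.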
Moctezuma fails to explicitly disclose: using a first deep neural network to determine a first set of locations in the image corresponding to the keypoints in the first articulated model (see the detailed description: each machine learning model may be trained to generate context assessment based on estimating patient presence using images captured from a respective angle; for example, a first machine learning model is trained to generate context assessment based on estimating patient presence using images captured from a first angle (relative to a floor surface normal) in the medical treatment location (e.g., an overhead straight-down view, such as a birds-eye view), para. 0030); and using a first graph-fitting process that takes as input the locations in the image corresponding to the keypoints and the affinity fields in the first model to fit the first articulated model to the first entity in the image.

Thornton discloses: using a first deep neural network (see the detailed description: the virtual presence cameras (e.g. 205, 206, 207 or 165, 166, 167) located in one or more medical treatment location(s) of the medical environment, paras. 0040, 0068) to determine a first set of locations in the image corresponding to the keypoints in the first articulated model (see the detailed description: a client application can run on one or more of the disclosed servers to determine context assessment for the medical treatment location based on estimating patient presence using a machine learning model; remote users can thereby establish the virtual presence in a medical treatment location connected to the LAN 225, para. 0045); and using a first graph-fitting process that takes as input the locations in the image corresponding to the keypoints and the affinity fields in the first model to fit the first articulated model to the first entity in the image (see the detailed description: in some cases, depending on the angle of the camera, a given machine learning model of a plurality of machine learning models is selected; namely, each machine learning model may be trained to generate context assessment based on estimating patient presence using images captured from a respective angle; a processing device, such as user viewing computer 112, receives a video stream from a first camera 165 and determines that the first camera 165 is associated with the first angle; in response, the processing device provides the video feed from the first camera 165 to the first machine learning model rather than the second machine learning model to obtain the context assessment for the medical treatment location, para. 0030).

It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Moctezuma and Thornton before her, to be motivated to combine the features from Thornton with Moctezuma, including using a first deep neural network to determine a first set of locations in the image corresponding to the keypoints in the first articulated model (Thornton, paras. 0040, 0045, 0068, cited above) and using a first graph-fitting process that takes as input the locations in the image corresponding to the keypoints and the affinity fields in the first model to fit the first articulated model to the first entity in the image (Thornton, para. 0030, cited above). Therefore, a rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. [1]

With regard to claim 2, Thornton discloses:

2. (Original) A method as in claim 1, wherein the keypoints correspond to position co-ordinates, and wherein the affinity fields correspond to vectors linking the co-ordinates of the relevant keypoints (see Fig.
3 and the detailed description: a deep learning model (or a machine learning model, such as a neural network, linear regression, logistic regression, random forest, gradient boosted trees, support vector machines, decision trees, nearest neighbor, or naïve Bayes), such as a deep convolutional neural network (DCNN), para. 0047).

It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Moctezuma and Thornton before her, to be motivated to combine the features from Thornton with Moctezuma, including that the keypoints correspond to position co-ordinates and the affinity fields correspond to vectors linking the co-ordinates of the relevant keypoints (Thornton, Fig. 3 and para. 0047, cited above). Therefore, a rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. [2]

With regard to claim 3, Thornton discloses:

3. (Currently Amended) A method as in claim 1, wherein the first articulated model (see Fig. 3 and the detailed description: the content from the selected sources is used to automatically generate a context assessment for the medical treatment location based on estimating presences of a patient at the medical treatment location; the context assessment can include any combination of patient arrival and departure times at the medical treatment location and video device connection state, para. 0019; the articulated model is interpreted as the computer-generated model of the patient at the medical treatment center, as the patient proceeds to be processed and treated within the medical treatment location) is represented as:

a tuple of co-ordinates, each coordinate in the tuple of coordinates corresponding to a keypoint (see the detailed description: a high-level view of the major stages of a procedure includes: 1. Room Cleaning by environmental services/housekeeping team; 2. Room preparation by scrub and/or circulating nurse (sterile field, surgical instruments, etc.), para. 0020); and

a tuple of vectors between different pairs of co-ordinates in the tuple of co-ordinates, each vector corresponding to an affinity field (see the detailed description: the milestone data may include: Wheels In to Wheels Out = Procedure time; Wheels In vs Scheduled start time (variance); and Wheels In to Timeout = Patient Prep time and Surgery Start; the milestone data can be used to track procedure time by surgeon/procedure over time, to make recommendations on block-time schedule per physician, and to compare procedure time by surgeon/procedure to others, para. 0022).

It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Moctezuma and Thornton before her, to be motivated to combine the features from Thornton with Moctezuma, including that the keypoints correspond to position co-ordinates and the affinity fields correspond to vectors linking the co-ordinates of the relevant keypoints (see Fig.
3 and the detailed description: a deep learning model (or a machine learning model, such as a neural network, linear regression, logistic regression, random forest, gradient boosted trees, support vector machines, decision trees, nearest neighbor, or naïve Bayes), such as a deep convolutional neural network (DCNN), para. 0047). Therefore, a rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. [3]

With regard to claim 4, Thornton discloses:

4. (Currently Amended) A method as in claim 1, wherein the machine learning process comprises use of a neural network (see Fig. 3 and the detailed description, which illustrates an exemplary flow diagram for deep learning, where a deep learning model (or a machine learning model, such as a neural network, linear regression, logistic regression, random forest, gradient boosted trees, support vector machines, decision trees, nearest neighbor, or naïve Bayes), such as a deep convolutional neural network (DCNN), can be trained and used to determine presence of a patient in a medical treatment location or to distinguish between patient and non-patient personnel in the medical treatment location, para. 0047).

It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Moctezuma and Thornton before her, to be motivated to combine the features from Thornton with Moctezuma, including that the machine learning process comprises use of a neural network (Thornton, Fig. 3 and para. 0047, cited above). Therefore, a rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. [4]

With regard to claim 6, Thornton discloses:

6.
(Currently Amended) A method as in claim 1, wherein the location or posture is used to determine whether an event has occurred with respect to the first entity, wherein: the first entity is a person and the event is: the person exiting a bed; the person having a seizure; or the person remaining in one position for longer than a predefined time threshold; or wherein: the first entity is a piece of medical equipment (see the detailed description: a client application on a computing device of a remote user or remote viewer of the network 294 is employed to select a display within the room of the medical environment, after the virtual view has been provided to the remote user on the network 294, para. 0045) and the event is: the piece of medical equipment being moved from a first location to a second location; the piece of equipment being attached to a patient; or the piece of equipment being used to perform a medical procedure on a patient.

It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Moctezuma and Thornton before her, to be motivated to combine the features from Thornton with Moctezuma, including that the first entity is a piece of medical equipment and the associated event (Thornton, para. 0045, cited above). Therefore, a rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. [5]

7. (Cancelled)
9. (Cancelled)
11. (Cancelled)

With regard to claim 14, Thornton discloses:

14. (Currently Amended) A method as in claim 1, wherein the location or posture of the first entity is used to determine whether an item in a clinical workflow has been performed (see the detailed description: the "alert" 602 notifies the surgeon of a room state change; the photographs provide a view 603 of the room to allow the surgeon to visually evaluate room readiness; in addition, if the surgeon feels the need to communicate directly with the staff, the surgeon may initiate a live 2-way collaboration session 604 with the staff in the room with a single button click, para. 0066); and updating the workflow with the result of the determination (see Fig. 6 and the detailed description, paras. 0065-0066).

It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Moctezuma and Thornton before her, to be motivated to combine the features from Thornton with Moctezuma, including that the location or posture of the first entity is used to determine whether an item in a clinical workflow has been performed, and updating the workflow with the result of the determination (Thornton, Fig. 6 and paras. 0065-0066, cited above).
Therefore, a rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. [6]

With regard to claim 15, Thornton discloses:

15. (Currently Amended) A method as in claim 1, wherein the method is triggered by an item in a clinical workflow and wherein the location or posture of the first entity is used to determine whether the item has been performed (see the detailed description: the mainstream system 110 may generate a context assessment for the medical treatment location (e.g., determine that the patient is ready for a medical procedure); in such cases, the mainstream system 110 processes EMR data associated with the medical treatment location to identify the physician assigned to perform the medical procedure at the medical treatment location at the current time, para. 0066); and updating the workflow with the result of the determination (see the detailed description: the mainstream system 110 sends a notification which is presented to the physician as an alert 602; the notification may inform the physician of the status of the medical treatment location and may include the generated context assessment; the physician may select the notification and be presented with interface view 603 listing various medical treatment locations, para. 0066).

It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Moctezuma and Thornton before her, to be motivated to combine the features from Thornton with Moctezuma, including that the method is triggered by an item in a clinical workflow, that the location or posture of the first entity is used to determine whether the item has been performed, and updating the workflow with the result of the determination (Thornton, para. 0066, cited above).
Therefore, a rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. [7]

With regard to claim 16, claim 16 recites substantially similar limitations to claim 1 (a method claim) and is therefore rejected using the same art and rationale set forth above (see Moctezuma, with the addition that a navigation computer 26 has the displays 28, 29, a central processing unit (CPU) and/or other processors, and memory, para. 0034).

With regard to claim 17, claim 17 (an apparatus claim) recites substantially similar limitations to claim 1 (a method claim) and is therefore rejected using the same art and rationale set forth above (with the addition of a memory and a processor; see Moctezuma: a navigation computer 26 has the displays 28, 29, central processing unit (CPU) and/or other processors, and memory, para. 0034).

With regard to claim 18, Thornton discloses:

18. (Original) An apparatus as in claim 17 further comprising: an image acquisition unit for obtaining the image (see the detailed description: using well-known navigation techniques for registration and coordinate system transformation, the patient's anatomy and the working end of the surgical instrument 22 can be registered into a coordinate reference frame of the localizer 34 so that the working end and the anatomy can be tracked together using the LEDs 50, para. 0044); and/or a time-of-flight camera to obtain image depth information for the fitted keypoints of the entity in the image.

It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Moctezuma and Thornton before her, to be motivated to combine the features from Thornton with Moctezuma, including the image acquisition unit for obtaining the image (para. 0044, cited above) and/or a time-of-flight camera to obtain image depth information for the fitted keypoints of the entity in the image. Therefore, a rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. [8]

Allowable Subject Matter

Claims 8, 10, 12, and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 8 should also be corrected in light of the minor objection raised above. For convenience, claim 8 is presented below:

8. (Currently Amended) A method as in claim 1 further comprising: using the machine learning process to fit a second articulated model to a second entity in the image, wherein the second articulated model comprises keypoints corresponding to joints and affinity fields that indicate links between the keypoints; and determining an interaction between the first entity and the second entity in the image from relative locations of fitted keypoints of the first articulated model and fitted keypoints of the second articulated model.
determining depth information associated with fitted keypoints in the first articulated model and fitted keypoints in the second articulated model; and wherein the step of determining an interaction between the first entity and the second entity in the image is further based on the depth information.

A sampling of the prior art made of record and not relied upon, and considered pertinent to Applicants' disclosure, includes: U.S. Patent Application Publication No. 2022/0249014 A1 to Geiger et al., which discusses systems and methods for an intuitive display of one or more anatomical objects. One or more 3D medical images of one or more anatomical objects of a patient are received; correspondences between the 3D medical images and points on a 2D map representing the anatomical objects are determined; the 2D map is updated with patient information extracted from the 3D medical images; and the updated 2D map with the determined correspondences is output.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM D. TITCOMB, whose telephone number is (571) 270-5190. The examiner can normally be reached 9:30 AM - 6:30 PM (M-F). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen C. Hong, can be reached at 571-272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center; unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/WILLIAM D TITCOMB/
Primary Examiner, Art Unit 2178
February 20, 2026

Footnotes [1]-[8]: KSR International Co. v. Teleflex Inc., 127 S.Ct. 1727, 82 USPQ2d 1385 (2007).

Prosecution Timeline

May 16, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

- Patent 12604055: Auto-reframing and multi-cam functions of video editing application. Granted Apr 14, 2026 (2y 5m to grant).
- Patent 12591441: Determining sequences of interactions, process extraction, and robot generation using generative artificial intelligence / machine learning models. Granted Mar 31, 2026 (2y 5m to grant).
- Patent 12591442: Determining sequences of interactions, process extraction, and robot generation using generative artificial intelligence / machine learning models. Granted Mar 31, 2026 (2y 5m to grant).
- Patent 12579647: Evaluation apparatus, evaluation method, and evaluation program. Granted Mar 17, 2026 (2y 5m to grant).
- Patent 12573231: Controlling rollable display devices based on fingerprint information and touch information. Granted Mar 10, 2026 (2y 5m to grant).
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83% (98% with interview, a +14.4% lift)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
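The headline figures in this box appear to follow directly from the counts cited above. A quick reproduction of that apparent arithmetic (a sketch only; the vendor's actual model may differ, and the variable names are mine):

```python
# Grant probability from the career allow rate, then the interview adjustment.
granted, resolved = 516, 619
allow_rate = granted / resolved               # 516/619 ≈ 0.834, the 83% shown
interview_lift = 0.144                        # the stated +14.4 percentage points
with_interview = allow_rate + interview_lift  # ≈ 0.978, rounding to the 98% shown
```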
