Prosecution Insights
Last updated: April 19, 2026
Application No. 18/540,645

SYSTEMS, DEVICES, AND METHODS FOR IDENTIFYING AND LOCATING A REGION OF INTEREST

Status: Non-Final OA (§103)
Filed: Dec 14, 2023
Examiner: RODIN, MARIO ANTHONY
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Mazor Robotics Ltd.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; resolved cases with vs. without interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 6 total applications across all art units, 6 currently pending

Statute-Specific Performance

§103: 80.0% (+40.0% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)
TC averages are estimates • Based on career data from 0 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 22-31, 33-37, 39, and 40 are objected to because of the following informalities: typographical errors reading “The device of claim 1”, “The device of claim 9”, “The device of claim 12”, “The device of claim 13”, and “The device of claim 18” should read “The device of claim 21”, “The device of claim 29”, “The device of claim 32”, “The device of claim 33”, and “The device of claim 38”, respectively. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 21, 23-28, 32, and 35-37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nye and Rao of US 2020/0372651 (hereinafter referred to as “Nye”) in view of Gregerson et al. of CN 111417354A (hereinafter referred to as “Gregerson”).

Nye discloses a device for identifying a region of interest comprising: at least one processor, and a memory storing data for processing by the processor (see Nye par. 0016, where memory, processor, and all computer components are explicitly expressed), the data, when processed, causing the processor to: receive an image, the image depicting a patient (see Nye par. 0045 “the configuration can receive the images” and Nye par. 0177 “certain examples facilitate image acquisition and analysis… at the point of patient imaging”); input the image to a region of interest model, wherein the region of interest model is trained using historical data (see Nye paras. 0108-0111 “contextual patient information can be used to improve accuracy of an artificial intelligence algorithm model, for example”, “For example… chest x-ray pneumothorax AI detection algorithm… may not have the patient’s prior chest x-ray… Therefore, providing the AI algorithm with prior imaging exams, would be necessary to determine whether the pneumothorax finding shall be considered critical or not”), and wherein the region of interest model is configured to identify a region of interest at least in part by processing the image and determining a pose of the region of interest; receive an identified region of interest and a pose of the identified region of interest from the region of interest model (see Nye par. 0100 “the image data can be conditioned for processing by machine learning, such as a deep learning network, etc., to identify one or more features of interest in the image data”, par. 0101 “the pre-processed image data is provided to the learning network for processing of the image data to identify one or more clinical/critical findings”, and par. 0129 “the image data is analyzed to determine whether the image data matches a position and region indicated by the metadata. For example, if the … metadata indicates that the image is a frontal … chest image, then an analysis of the image data should confirm that position (e.g., location and orientation, etc.)”).

Nye does not disclose automatically providing instructions to a controller to adjust a pose of a surgical instrument relative to the pose of the identified region of interest. However, Gregerson discloses automatically providing instructions to a controller to adjust a pose of a surgical instrument relative to the pose of the identified region of interest (see Gregerson pg. 10 par. 3 “the robotic arm … may also operate in an autonomous mode, wherein the robotic arm … moves to a particular pose in response to control signals from the robotic control system … (e.g., according to a robotic motion planning algorithm…)”). It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing autonomous robotic arm of Gregerson to the existing device for identifying a region of interest of Nye because it is predictable that this would allow the device for identifying a region of interest to operate physically rather than purely digitally and have an operable physical component to enact the instructions that the digital model computes.

Regarding claim 23, Nye in view of Gregerson discloses the device of claim 21, wherein processing the image uses feature recognition to identify the region of interest (see Nye par. 0073 and Fig. 4 “FIG. 4 illustrates a particular implementation of the example neural network as a convolutional neural network… More specifically, … a convolution … is applied to a portion or window … of the input … in the first layer to provide a feature map.”).

Regarding claim 24, Nye in view of Gregerson discloses the device of claim 21, wherein the surgical instrument comprises a robotic arm configured to orient an imaging device, and wherein the memory stores further data for processing by the processor that, when processed, causes the processor to provide instructions to a controller to adjust a pose of the robotic arm to frame all of the identified region of interest within a field of view of the imaging device (see Gregerson pg. 6 “a first image dataset of an anatomical structure of a patient may be obtained using an imaging device, such as the imaging device 103 shown in fig. 1 and 2. The first image dataset may be a three-dimensional dataset … representing at least a portion of a patient’s anatomy including an internal anatomy and/or structure … on which surgery is to be performed” and as shown in Fig. 1, the robotic arm 101 is connected to the imaging device 103, in which the imaging device is the end-effector of the robotic arm in this embodiment. In order to obtain these images, the robotic arm must move the imaging device to frame all of the identified region of interest within the field of view of the image frame.).

Regarding claim 25, Nye discloses the device of claim 21, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: receive an updated image, the updated image depicting the patient, input the updated image to the region of interest model, and receive at least one of an updated identified region of interest and an updated pose of the updated identified region of interest (see Nye par. 0102 “The images can be pre-processed in real time based on acquisition conditions that generated the image to improve accuracy and efficacy of the inference process. In certain examples, the learning network(s) are trained, updated, and redeployed continuously and/or periodically upon acquisition of additional curated data.”). Gregerson discloses automatically generating updated instructions for the controller to adjust the pose of the surgical instrument relative to the pose of the updated identified region of interest (see Gregerson pg. 8 “The system 400 may be configured to repeatedly read tracking data from the motion tracking system 105 that indicates the current position/orientation of the patient and any other objects tracked by the motion tracking system 105 … In an embodiment, the tracking data from the motion tracking system may include data that enables the system to identify a particular object from within the tracking data.”).

Regarding claim 26, Nye discloses the device of claim 21, wherein the region of interest model is further trained using intraoperative patient data (see Nye par. 0102 “The images can be pre-processed in real time based on acquisition conditions that generated the image to improve accuracy and efficacy of the inference process. In certain examples, the learning network(s) are trained, updated, and redeployed continuously and/or periodically upon acquisition of additional curated data”).

Regarding claim 27, Nye discloses the device of claim 21, wherein the historical data comprises a plurality of images, at least some of the images depicting a region of interest similar to the identified region of interest (see Nye par. 0110 “For example… chest x-ray pneumothorax AI detection algorithm… may not have the patient’s prior chest x-ray… Therefore, providing the AI algorithm with prior imaging exams, would be necessary to determine whether the pneumothorax finding shall be considered critical or not” which indicates that the AI detection algorithm (region of interest model) is being fed multiple images which depict a region of interest such as the lungs to detect pneumothorax).

Regarding claim 28, Gregerson discloses the device of claim 21, wherein the region of interest comprises a spinal region (see Gregerson pg. 16 “In embodiments, the anatomical feature within the patient’s body may comprise a bone or skeletal feature, such as at least a portion of the patient’s spine”).

Regarding claim 31, Gregerson discloses the device of claim 21, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: identify a second region of interest, wherein the first region of interest is identified at a time stamp prior to a time stamp of the second region of interest, and the first region of interest has a pose different from a pose of the second region of interest (see Gregerson pg. 6 “Computer-assisted surgery techniques typically utilize a process of associating a data set representing a portion of a patient’s anatomy to be operated on with the patient’s position at the time of the surgical treatment. The position of the patient may be determined based on a second image data set, which may include real-time camera images from the motion tracking system…” which indicates that a region of interest is identified multiple times during the course of a surgery in real-time, including a second region of interest. Time stamps are a known piece of art as attached below in this office action that would have been obvious to add to the real-time images.).

Regarding claim 32, Nye discloses a device for identifying a region of interest comprising: at least one processor, and a memory storing data for processing by the processor (see Nye par. 0016, where memory, processor, and all computer components are explicitly expressed), the data, when processed, causing the processor to: receive an image, the image depicting a patient (see Nye par. 0045 “the configuration can receive the images” and Nye par. 0177 “certain examples facilitate image acquisition and analysis… at the point of patient imaging”); input the image to a region of interest model, wherein the region of interest model is trained using historical data (see Nye paras. 0108-0111 “contextual patient information can be used to improve accuracy of an artificial intelligence algorithm model, for example”, “For example… chest x-ray pneumothorax AI detection algorithm… may not have the patient’s prior chest x-ray… Therefore, providing the AI algorithm with prior imaging exams, would be necessary to determine whether the pneumothorax finding shall be considered critical or not”) and patient data received intraoperatively (see Nye par. 0102 “The images can be pre-processed in real time based on acquisition conditions that generated the image to improve accuracy and efficacy of the inference process. In certain examples, the learning network(s) are trained, updated, and redeployed continuously and/or periodically upon acquisition of additional curated data”), and wherein the region of interest model is configured to identify a region of interest at least in part by processing the image and determining a pose of the region of interest; receive an identified region of interest and a pose of the identified region of interest from the region of interest model (see Nye par. 0100 “the image data can be conditioned for processing by machine learning, such as a deep learning network, etc., to identify one or more features of interest in the image data”, par. 0101 “the pre-processed image data is provided to the learning network for processing of the image data to identify one or more clinical/critical findings”, and par. 0129 “the image data is analyzed to determine whether the image data matches a position and region indicated by the metadata. For example, if the … metadata indicates that the image is a frontal … chest image, then an analysis of the image data should confirm that position (e.g., location and orientation, etc.)”). Gregerson discloses automatically providing instructions to a controller to adjust a pose of a robotic arm relative to the pose of the identified region of interest (see Gregerson pg. 10 par. 3 “the robotic arm … may also operate in an autonomous mode, wherein the robotic arm … moves to a particular pose in response to control signals from the robotic control system … (e.g., according to a robotic motion planning algorithm…)”).

Regarding claim 35, Gregerson discloses the device of claim 32, wherein the robotic arm orients an imaging device, and wherein the instructions cause the controller to move the robotic arm to orient the imaging device to frame all of the identified region of interest within a field of view of the imaging device (see Gregerson pg. 6 “a first image dataset of an anatomical structure of a patient may be obtained using an imaging device, such as the imaging device 103 shown in fig. 1 and 2. The first image dataset may be a three-dimensional dataset … representing at least a portion of a patient’s anatomy including an internal anatomy and/or structure … on which surgery is to be performed” and as shown in Fig. 1, the robotic arm 101 is connected to the imaging device 103, in which the imaging device is the end-effector of the robotic arm in this embodiment. In order to obtain these images, the robotic arm must move the imaging device to frame all of the identified region of interest within the field of view of the image frame.).

Regarding claim 36, Gregerson discloses the device of claim 32, wherein the region of interest comprises a spinal region (see Gregerson pg. 16 “In embodiments, the anatomical feature within the patient’s body may comprise a bone or skeletal feature, such as at least a portion of the patient’s spine”).

Regarding claim 37, Gregerson discloses the device of claim 32, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: identify a second region of interest, wherein the first region of interest is identified at a time stamp prior to a time stamp of the second region of interest, and the first region of interest has a pose different from a pose of the second region of interest (see Gregerson pg. 6 “Computer-assisted surgery techniques typically utilize a process of associating a data set representing a portion of a patient’s anatomy to be operated on with the patient’s position at the time of the surgical treatment. The position of the patient may be determined based on a second image data set, which may include real-time camera images from the motion tracking system…” which indicates that a region of interest is identified multiple times during the course of a surgery in real-time, including a second region of interest. Time stamps are a known piece of art as attached below in this office action that would have been obvious to add to the real-time images.).

Claim(s) 22, 29, 30, 33, and 34 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nye in view of Gregerson as applied to claims 21, 23-28, 32, and 35-37 above, and further in view of Armand et al. of US 2017/0000504 (hereinafter referred to as “Armand”).

Regarding claim 22, Nye and Gregerson fail to disclose wherein the image also depicts one or more trackers, and wherein the pose of the identified region of interest is determined relative to the one or more trackers. However, Armand discloses wherein the image also depicts one or more trackers, and wherein the pose of the identified region of interest is determined relative to the one or more trackers (see Armand par. 0015 “transforming a set of cephalometric landmarks from a donor skeletal fragment in a first reference frame to a second reference frame … tracking movement, based on one or more signals from one or more sensors, during surgery, of one or more of the cephalometric landmarks in the set of cephalometric landmarks associated with the donor skeletal fragment” indicating that the landmarks give off signals from a sensor, further implying that the sensor (or tracker) would be in a reference frame of the image taken). It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing robotic arm of Gregerson and device for identifying a region of interest of Nye to the trackers of Armand because it is predictable that doing so would add physical location detection devices that provide detailed location information of the landmarks to the robotic arm and the device for identifying a region of interest that allow it to be more efficient and accurate in performing its functions.

Regarding claim 29, Nye and Gregerson fail to disclose wherein processing the image comprises identifying a plurality of skeletal landmarks. However, Armand discloses wherein processing the image comprises identifying a plurality of skeletal landmarks (see Armand par. 0019 “The registration process can farther comprise identifying, by a processor, a set of anatomical landmarks on the donor skeletal fragment and the recipient skeletal portion from the previously acquired medical imaging scans; and creating, by a processor, a point-to-surface registration between the set of anatomical landmarks associated with the donor skeletal fragment and a surface model” (or the recipient skeletal, par. 0018) “of the recipient skeletal using an iterative closest point algorithm”). It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing robotic arm of Gregerson and device for identifying a region of interest of Nye to the device for identifying skeletal landmarks of Armand because it is predictable that doing so would increase accuracy of the device for identifying a region of interest by using skeletal landmarks. Claim 33 is rejected under the same analysis as claim 29 above.

Regarding claim 30, Nye and Gregerson fail to disclose wherein the pose of the region of interest is determined relative to the plurality of skeletal landmarks. However, Armand discloses wherein the pose of the region of interest is determined relative to the plurality of skeletal landmarks (see Armand par. 0021, “The tracking can comprise determining, during surgery, the orientation of the donor skeletal fragment with respect to the recipient skeletal portion by measuring one or more cephalometric landmarks from the set of cephalometric landmarks using a tracking device” with “orientation” indicating pose). It would have been obvious for one of ordinary skill in the art before the effective filing date to add the device for determining pose based on skeletal landmarks of Armand to the existing robotic arm of Gregerson and device for identifying a region of interest of Nye because it is predictable that doing so would increase the accuracy of the pose of the region of interest that the device determines. Claim 34 is rejected under the same analysis as claim 30 above.

Claim(s) 38-40 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nye in view of Gregerson, and further in view of Armand.

Regarding claim 38, Nye discloses a device for identifying a region of interest comprising: at least one processor, and a memory storing data for processing by the processor (see Nye par. 0016, where memory, processor, and all computer components are explicitly expressed), the data, when processed, causing the processor to: receive an image, the image depicting a patient (see Nye par. 0045 “the configuration can receive the images” and Nye par. 0177 “certain examples facilitate image acquisition and analysis… at the point of patient imaging”); input the image to a region of interest model, wherein the region of interest model is trained using historical data (see Nye paras. 0108-0111 “contextual patient information can be used to improve accuracy of an artificial intelligence algorithm model, for example”, “For example… chest x-ray pneumothorax AI detection algorithm… may not have the patient’s prior chest x-ray… Therefore, providing the AI algorithm with prior imaging exams, would be necessary to determine whether the pneumothorax finding shall be considered critical or not”) and patient data received intraoperatively (see Nye par. 0102 “The images can be pre-processed in real time based on acquisition conditions that generated the image to improve accuracy and efficacy of the inference process. In certain examples, the learning network(s) are trained, updated, and redeployed continuously and/or periodically upon acquisition of additional curated data”), and receive an identified region of interest and a pose of the identified region of interest from the region of interest model (see Nye par. 0100 “the image data can be conditioned for processing by machine learning, such as a deep learning network, etc., to identify one or more features of interest in the image data”, par. 0101 “the pre-processed image data is provided to the learning network for processing of the image data to identify one or more clinical/critical findings”, and par. 0129 “the image data is analyzed to determine whether the image data matches a position and region indicated by the metadata. For example, if the … metadata indicates that the image is a frontal … chest image, then an analysis of the image data should confirm that position (e.g., location and orientation, etc.)”).

Nye does not explicitly disclose wherein the region of interest model is configured to identify a region of interest at least in part by processing the image to identify a plurality of skeletal landmarks and determining a pose of the region of interest based on the plurality of skeletal landmarks, automatically generating instructions for a controller to adjust a pose of a robotic arm relative to the pose of the identified region of interest, tracking the plurality of skeletal landmarks for movement, and updating the region of interest when movement of the plurality of skeletal landmarks is detected. Gregerson discloses automatically providing instructions to a controller to adjust a pose of a robotic arm relative to the pose of the identified region of interest (see Gregerson pg. 10 par. 3 “the robotic arm … may also operate in an autonomous mode, wherein the robotic arm … moves to a particular pose in response to control signals from the robotic control system … (e.g., according to a robotic motion planning algorithm…)”). It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing autonomous robotic arm of Gregerson to the existing device for identifying a region of interest of Nye because it is predictable that this would allow the device for identifying a region of interest to operate physically rather than purely digitally and have an operable physical component to enact the instructions that the digital model computes.

Furthermore, Armand discloses wherein the region of interest model is configured to identify a region of interest at least in part by processing the image to identify a plurality of skeletal landmarks (see Armand par. 0019 “The registration process can farther comprise identifying, by a processor, a set of anatomical landmarks on the donor skeletal fragment and the recipient skeletal portion from the previously acquired medical imaging scans; and creating, by a processor, a point-to-surface registration between the set of anatomical landmarks associated with the donor skeletal fragment and a surface model” (or the recipient skeletal, par. 0018) “of the recipient skeletal using an iterative closest point algorithm”) and determining a pose of the region of interest based on the plurality of skeletal landmarks, tracking the plurality of skeletal landmarks for movement (see Armand par. 0021, “The tracking can comprise determining, during surgery, the orientation of the donor skeletal fragment with respect to the recipient skeletal portion by measuring one or more cephalometric landmarks from the set of cephalometric landmarks using a tracking device” with “orientation” indicating pose) and updating the region of interest when movement of the plurality of skeletal landmarks is detected (see Armand par. 0026 “The updates can comprise a change in appearance of a visual indicator on a hybrid model as the donor skeletal fragment is mated with the recipient skeletal fragment.”). It would have been obvious for one of ordinary skill in the art before the effective filing date to add the device for identifying and tracking skeletal landmarks of Armand to the existing robotic arm of Gregerson and device for identifying a region of interest of Nye because it is predictable that doing so would increase accuracy of the device for identifying a region of interest and allow the device for identifying a region of interest to operate intraoperatively, which improves usefulness.

Claim 39 is rejected under the same analysis as claim 35. Claim 40 is rejected under the same analysis as claim 36.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 5347653 A (Column 2 Line 59).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIO A. RODIN, whose telephone number is (571) 272-8003. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARIO ANTHONY RODIN/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Dec 14, 2023: Application Filed
Feb 08, 2024: Response after Non-Final Action
Jan 22, 2026: Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
