Prosecution Insights
Last updated: April 19, 2026
Application No. 17/913,770

SYSTEMS AND METHODS FOR REGISTERING AN INSTRUMENT TO AN IMAGE USING POINT CLOUD DATA AND ENDOSCOPIC IMAGE DATA

Non-Final OA (§103)
Filed: Sep 22, 2022
Examiner: BURKE, TIONNA M
Art Unit: 2178
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Intuitive Surgical Operations, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 54% (Moderate)
OA Rounds: 3-4
To Grant: 4y 9m
With Interview: 73%

Examiner Intelligence

Grants 54% of resolved cases.

Career Allow Rate: 54% (233 granted / 431 resolved; -0.9% vs TC avg)
Interview Lift: +19.3% (strong; measured across resolved cases with an interview)
Avg Prosecution: 4y 9m (typical timeline; 46 applications currently pending)
Total Applications: 477 (career history, across all art units)
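The headline metrics above are simple derived ratios. As a quick sanity check on how they fit together (the with/without-interview split below is a hypothetical placeholder, since the report states only the net +19.3% lift):

```python
# Sanity-check the report's examiner metrics. The granted/resolved counts
# come from the report itself; the with/without-interview allow rates are
# hypothetical values that merely illustrate how an interview lift of
# +19.3 percentage points would be computed.
granted, resolved = 233, 431
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")    # ~54.1%, reported as 54%

with_interview, without_interview = 0.68, 0.487  # hypothetical split
lift = with_interview - without_interview
print(f"Interview lift: {lift:+.1%}")
```

The lift is a difference of allow rates between interviewed and non-interviewed resolved cases, not a multiplier on the base rate.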

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 431 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant's Response

In Applicant's Response dated 12/9/25, the Applicant amended Claims 1 and 20 and argued claims previously rejected in the Office Action dated 9/30/25. Claims 1-3, 5-20 and 32 are pending examination.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/9/25 has been entered.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/23/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-8 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ganatra et al., United States Patent Publication 2009/0227861 (hereinafter "Ganatra"), in view of Glossop, United States Patent Publication 2007/0055128 (hereinafter "Glossop").

Claim 1: Ganatra discloses: A medical instrument system for use in an image-guided medical procedure (see paragraph [0017]). Ganatra discloses a medical instrument used in an image-guided medical procedure, the system comprising:

a positional sensor configured to generate positional sensor data associated with one or more positions of a biomedical device within an anatomic region of a patient (see paragraphs [0018]-[0020]). Ganatra teaches a sensor that generates location information of the device within an anatomic region;

an image capture device configured to capture first image data of patient anatomy within the anatomic region while the biomedical device is positioned within the anatomic region (see paragraphs [0018] and [0019]). Ganatra teaches a bronchoscope configured to acquire medical images of the subject and transmit the images;

a processor communicatively coupled to the positional sensor and the image capture device (see paragraph [0024]). Ganatra teaches a processor coupled to the sensor and image device; and

a memory storing instructions that, when executed by the processor, cause the system to perform operations (see paragraph [0024]) comprising:

generating a point cloud of coordinate points based, at least in part, on the positional sensor data (see paragraphs [0020]-[0022]). Ganatra teaches generating a store of points based on the location data;

receiving second image data of the anatomic region, wherein the second image data is generated based, at least in part, on imaging of the anatomic region (see paragraph [0022]).
Ganatra teaches receiving second image data from the bronchoscope;

adding one or more coordinate points to the point cloud at one or more locations corresponding to one or more positions of the image capture device within the anatomic region (see paragraphs [0020] and [0026]). Ganatra teaches adding coordinates at locations corresponding to the device and the points in the model that coincide with the image; and

generating a registration between at least a portion of the point cloud and at least a portion of the second image data (see paragraph [0022]). Ganatra teaches generating a registration between the location points and the images from the device.

Ganatra fails to expressly disclose generating additional points based on data from the first image data.

Glossop discloses: adding one or more coordinate points to the point cloud at one or more locations corresponding to one or more positions of the image capture device within the anatomic region associated with the first image data (see paragraphs [0028]-[0030] and [0035]-[0038]). Glossop teaches adding points at multiple locations corresponding to intraoperative images and pre-operative images; and

updating the registration based, at least in part, on the first image data (see paragraph [0038]). Glossop teaches updating the registration with the new locations based on the pre-operative image data.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra to include adding coordinate points at one or more locations associated with the image capture device and the first image data for the purpose of enabling the shape of a flexible endoscope or other instrument to be determined without the use of additional imaging, as taught by Glossop.

Claim 2: Ganatra fails to expressly disclose generating additional points based on data from the first image data.
Glossop discloses: wherein the operations further comprise generating one or more correspondences by matching patient anatomy in one or more images of the first image data with patient anatomy of the anatomic region in the portion of the second image data (see paragraphs [0028]-[0030] and [0035]-[0038]). Glossop teaches adding points at multiple locations corresponding to intraoperative images and pre-operative images.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra to include adding coordinate points at one or more locations associated with the image capture device and the first image data for the purpose of enabling the shape of a flexible endoscope or other instrument to be determined without the use of additional imaging, as taught by Glossop.

Claim 3: Ganatra discloses: wherein the patient anatomy in the one or more images of the first image data and the patient anatomy of the anatomic region in the portion of the second image data are one or more branching points of anatomic passageways in the anatomic region (see paragraph [0030]). Ganatra teaches generating a correspondence between the CT image and the image of the bronchoscope showing the branching points and passageways.

Claim 6: Ganatra discloses: wherein the portion of the point cloud includes only the one or more added coordinate points (see paragraph [0027]). Ganatra teaches storing the added coordinates in a group of collected measurements.

Claim 7: Ganatra discloses: determining a transformation to align an image of the one or more images of the first image data with corresponding patient anatomy of the anatomic region in the portion of the second image data, and wherein generating the registration includes generating the registration based, at least in part, on the transformation (see paragraph [0020]).
Ganatra teaches that the collected positional data may be used to find a mathematical transformation, for example an affine transformation, which relates the two frames of reference to one another, thereby registering the tracking coordinate system with the model coordinate system.

Claim 8: Ganatra discloses: wherein the operations further comprise determining, based, at least in part, on the first image data, at least a portion of a pathway taken by the biomedical device throughout the anatomic region, and wherein generating the registration includes generating the registration between at least the portion of the point cloud and a section of the anatomic region corresponding to the portion of the pathway (see paragraph [0031]). Ganatra teaches that the operator advances the bronchoscope in different directions, along a series of bronchial branches, while the tracking system collects positional information for sensor X, which is received by the processor to calculate trajectories of sensor X for comparison with characteristic contours of the subsets of predetermined points that define portions, or branches, of the pathway of the model stored in the database. Sets of coordinates for locations of tracking sensor X that are along trajectories matched to the subsets of predetermined points are identified and used with the corresponding predetermined points to find a mathematical transformation, which relates the two frames of reference to one another and thereby registers the tracking coordinate system with the model coordinate system.

Claim 17: Ganatra discloses: wherein the operations further comprise: determining when a current position or orientation of the biomedical device has changed by a threshold amount (see paragraph [0026]).
Ganatra teaches determining when the current position or orientation of the device changes based on the calculated distances; and

generating, in response to the determination, a correspondence by matching patient anatomy in an image of the first image data with patient anatomy in the portion of the anatomic region in the second image data (see paragraph [0026]). Ganatra teaches generating a correspondence by matching the images of the patient.

Claim 18: Ganatra discloses: wherein the operations further comprise: determining, based, at least in part, on the generated registration, when the biomedical device is positioned at first patient anatomy within the anatomic region (see paragraph [0026]). Ganatra teaches determining, based on the registration, when the device is positioned at a first anatomy.

Ganatra fails to expressly disclose generating additional points based on data from the first image data.

Glossop discloses: generating, in response to the determination, a correspondence by matching the first patient anatomy in an image of the first image data with the first patient anatomy in the portion of the anatomic region in the second image data (see paragraphs [0028]-[0030] and [0035]-[0038]). Glossop teaches adding points at multiple locations corresponding to intraoperative images and pre-operative images.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra to include adding coordinate points at one or more locations associated with the image capture device and the first image data for the purpose of enabling the shape of a flexible endoscope or other instrument to be determined without the use of additional imaging, as taught by Glossop.
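The mapped claims repeatedly turn on finding a "mathematical transformation" that relates the tracking (sensor) coordinate system to the model (image) coordinate system; see the Claim 7 and Claim 8 discussions above. As a purely illustrative sketch, not drawn from any cited reference, a rigid version of such a fit over matched point pairs can be computed with the Kabsch algorithm. All names here are hypothetical.

```python
# Illustrative only: a least-squares rigid (rotation + translation) fit
# between matched point sets, one concrete instance of the "mathematical
# transformation, for example, an affine transformation" the OA describes.
import numpy as np

def register_point_sets(sensor_pts, model_pts):
    """Rigid transform (R, t) mapping sensor_pts onto model_pts (Kabsch)."""
    cs, cm = sensor_pts.mean(axis=0), model_pts.mean(axis=0)
    H = (sensor_pts - cs).T @ (model_pts - cm)   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cs                              # model = R @ sensor + t
    return R, t

sensor = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R, t = register_point_sets(sensor, sensor + np.array([1.0, 2.0, 3.0]))
# For a pure shift, the fit recovers R = identity and t = (1, 2, 3).
```

A full affine fit would additionally solve for scale and shear; the rigid case is shown because it keeps the two frames metrically consistent.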
Claim 19: Ganatra discloses: wherein the operations further comprise: determining when the biomedical device is subject to commanded movement through anatomic passageways of the anatomic region (see paragraphs [0022]-[0023]). Ganatra teaches determining when the user, through the user interface, commands a path through the passageways.

Ganatra fails to expressly disclose generating additional points based on data from the first image data.

Glossop discloses: in response to the determination, generating and/or updating the registration (see paragraph [0038]). Glossop teaches updating the registration with the new locations based on the pre-operative image data.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra to include adding coordinate points at one or more locations associated with the image capture device and the first image data for the purpose of enabling the shape of a flexible endoscope or other instrument to be determined without the use of additional imaging, as taught by Glossop.

Claim 20: Ganatra discloses: A non-transitory, computer-readable medium storing instructions thereon that, when executed by one or more processors of a computing system (see paragraph [0024]). Ganatra teaches a processor coupled to the sensor and image device;

cause the computing system to perform operations comprising: generating a point cloud of coordinate points based, at least in part, on positional sensor data captured using a position sensor, wherein the positional sensor data is associated with one or more positions of a biomedical device within an anatomic region of a patient (see paragraphs [0020]-[0022]).
Ganatra teaches generating a store of points based on the location data of the biomedical device within an anatomic region of a patient;

receiving first image data of patient anatomy captured using an image capture device positioned within the anatomic region (see paragraphs [0018] and [0019]). Ganatra teaches a bronchoscope configured to acquire medical images of the subject and transmit the images;

receiving second image data of the anatomic region, wherein the second image data is generated based, at least in part, on preoperative or intraoperative imaging of the anatomic region (see paragraph [0022]). Ganatra teaches receiving second image data from the bronchoscope;

adding one or more coordinate points to the point cloud at one or more locations corresponding to one or more positions of the image capture device within the anatomic region (see paragraphs [0020] and [0026]). Ganatra teaches adding coordinates at locations corresponding to the device and the points in the model that coincide with the image; and

generating a registration between at least a portion of the point cloud with at least a portion of the second image data (see paragraph [0022]). Ganatra teaches generating a registration between the location points and the images from the device.

Ganatra fails to expressly disclose generating additional points based on data from the first image data.

Glossop discloses: adding one or more coordinate points to the point cloud at one or more locations corresponding to one or more positions of the image capture device within the anatomic region associated with the first image data (see paragraphs [0028]-[0030] and [0035]-[0038]). Glossop teaches adding points at multiple locations corresponding to intraoperative images and pre-operative images; and

updating the registration based, at least in part, on the first image data (see paragraph [0038]). Glossop teaches updating the registration with the new locations based on the pre-operative image data.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra to include adding coordinate points at one or more locations associated with the image capture device and the first image data for the purpose of enabling the shape of a flexible endoscope or other instrument to be determined without the use of additional imaging, as taught by Glossop.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Ganatra and Glossop, in view of Polidor et al., United States Patent Publication 2017/0010087 (hereinafter "Polidor").

Claim 5: Ganatra and Glossop fail to expressly disclose weighting added coordinate points differently than point cloud coordinates from the sensor.

Polidor discloses: includes weighting the one or more added coordinate points differently than other coordinate points of the point cloud generated from the positional sensor data (see paragraphs [0036]-[0038]). Polidor teaches weighting added points from the sensor differently than other known data points.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra and Glossop to include weighting the added coordinate points differently than the point cloud coordinates from the sensor for the purpose of efficiently maintaining accurate data points, as taught by Polidor.

Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Ganatra and Glossop, in view of Akimoto et al., United States Patent Publication 2015/0057498 (hereinafter "Akimoto").

Claim 9: Ganatra and Glossop fail to expressly disclose estimating registration errors.

Akimoto discloses: wherein the operations further comprise estimating a registration error between a correspondence of the one or more correspondences and the generated registration (see paragraphs [0067], [0087] and [0088]).
Akimoto teaches estimating a registration error of the correspondences.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra and Glossop to include estimating a registration error for the purpose of efficiently matching images during registration, as taught by Akimoto.

Claim 10: Ganatra and Glossop fail to expressly disclose estimating registration errors.

Akimoto discloses: wherein the operations further comprise coloring a display of the generated registration based, at least in part, on a magnitude of the estimated registration error (see paragraph [0177]). Akimoto teaches displaying a visually distinctive feature based on the estimated error.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra and Glossop to include estimating a registration error for the purpose of efficiently matching images during registration, as taught by Akimoto.

Claim 11: Ganatra and Glossop fail to expressly disclose estimating registration errors.

Akimoto discloses: wherein the operations further comprise: estimating, in real-time, the registration error at a current location of the biomedical device within the anatomic region; and coloring a corresponding portion of the display (see paragraphs [0088], [0104] and [0177]). Akimoto teaches estimating the registration error in real-time and, if it is a valid error at the current location, displaying a visual distinction.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra and Glossop to include estimating a registration error for the purpose of efficiently matching images during registration, as taught by Akimoto.

Claims 12-16 are rejected under 35 U.S.C. 103 as being unpatentable over Ganatra and Glossop, in view of Yu et al., United States Patent Publication 2020/0069373 (hereinafter "Yu").

Claim 12: Ganatra and Glossop fail to expressly disclose computing virtual images of the patient anatomy.

Yu discloses: wherein the operations further comprise: computing, based, at least in part, on the generated registration, a virtual image of patient anatomy of the anatomic region from a perspective of the image capture device at a current location of the image capture device within the anatomic region (see paragraph [0073]). Yu teaches, based on the registration, generating a virtual anatomical model according to the medical images and a surgical plan according to the virtual anatomical model, to track the surgical environment according to the spatial information received from the spatial sensor system; and

determining a transformation to align the virtual image with an image of the first image data corresponding to the current location of the image capture device (see paragraphs [0137]-[0139]). Yu teaches using transforming operations to align the image with the virtual image so the registration is correct and updated.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include generating virtual images of the image data for a surgical procedure for the purpose of efficiently creating a virtual display for navigating a surgical instrument during surgery, as taught by Yu.

Claim 13: Ganatra and Glossop fail to expressly disclose updating registration based on the virtual images.

Yu discloses: wherein updating the registration includes updating the registration based, at least in part, on the determined transformation (see paragraph [0139]). Yu teaches dynamically updating the spatial data based on the position from the images to correspond with the image data.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include generating virtual images of the image data for a surgical procedure for the purpose of efficiently creating a virtual display for navigating a surgical instrument during surgery, as taught by Yu.

Claim 14: Ganatra and Glossop fail to expressly disclose determining the transformation based on the virtual images.

Yu discloses: wherein the determining the transformation includes: determining the transformation for only a portion of the generated registration within a threshold distance from the current location of the image capture device; or determining the transformation for a specific respiratory and/or cardiac phase of the patient (see paragraph [0139]). Yu teaches that the robotic system may determine whether the virtual planning object overlaps with the virtual active space. Additionally, since the virtual environment is a dynamic system, the spatial relationship determined is also dynamically updated according to the dynamically altered object properties of the virtual objects in the virtual environment. If the portion is altered by a calculated distance of the current location, then the registration is updated.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include generating virtual images of the image data for a surgical procedure for the purpose of efficiently creating a virtual display for navigating a surgical instrument during surgery, as taught by Yu.

Claim 15: Ganatra and Glossop fail to expressly disclose updating registration based on the virtual images and timestamps.
Yu discloses: wherein the operations further comprise: computing, based, at least in part, on the generated registration, a virtual image of patient anatomy of the anatomic region from a perspective of the image capture device at a current or previous location of the image capture device within the anatomic region, wherein the virtual image is associated with a first timestamp (see paragraph [0137]). Yu teaches computing, based on the registration, a virtual image of the patient and ensuring that the endoscope image data spatially and temporally coincides with the virtual anatomy in the virtual environment; and

determining an image of the first image data that best matches the virtual image, wherein the image of the first image data is included in a group of two or more images of the first image data, and wherein each image of the two or more images is associated with a timestamp occurring within a specified time period before, during, and/or after the first timestamp (see paragraph [0137]). Yu discloses that the robotic system may define the position and orientation of the endoscope image data through the corresponding registration data. The registration of the endoscope image data to the virtual environment allows the image/video of the endoscope image data to be displayed on the virtual anatomy.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include generating virtual images of the image data for a surgical procedure for the purpose of efficiently creating a virtual display for navigating a surgical instrument during surgery, as taught by Yu.

Claim 16: Ganatra and Glossop fail to expressly disclose determining a difference between timestamps in the virtual images.
Yu discloses: determining a difference between (i) a timestamp associated with the image of the first image data that best matches the virtual image and (ii) the first timestamp, and wherein updating the registration includes updating the registration based, at least in part, on the determined difference (see paragraph [0162]). Yu discloses that the endoscope image data may be displayed on a virtual endoscope display plane registered in the virtual environment. Alternatively, the endoscope image data may be displayed on a GUI-generated region superimposed on the rendering of the virtual environment. Since the endoscope image data is acquired by the endoscope device at a specific camera angle, the user interface may provide a control panel to allow the user of the robotic system to manipulate the viewing angle of the rendering of the virtual environment. If the viewing angle of the rendering coincides with the camera angle, the user is allowed to observe the endoscope image data augmented on the virtual environment; in other words, the endoscope image data spatially and temporally coincides with the virtual anatomy in the virtual environment. The user can determine a difference and correct the view so that the images coincide.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include generating virtual images of the image data for a surgical procedure for the purpose of efficiently creating a virtual display for navigating a surgical instrument during surgery, as taught by Yu.

Claim 32 is rejected under 35 U.S.C. 103 as being unpatentable over Ganatra and Glossop, in view of Cosman, United States Patent No. 6,167,295 (hereinafter "Cosman").

Claim 32: Ganatra discloses: A medical instrument system for use in an image-guided medical procedure, the system comprising (see paragraph [0017]).
Ganatra discloses a medical instrument used in an image-guided medical procedure, the system comprising:

a positional sensor configured to generate positional sensor data associated with one or more positions of a biomedical device within an anatomic region of a patient (see paragraphs [0018]-[0020]). Ganatra teaches a sensor that generates location information of the device within an anatomic region;

an image capture device configured to capture first image data of patient anatomy within the anatomic region while the biomedical device is positioned within the anatomic region (see paragraphs [0018] and [0019]). Ganatra teaches a bronchoscope configured to acquire medical images of the subject and transmit the images;

a processor communicatively coupled to the positional sensor and the image capture device (see paragraph [0024]). Ganatra teaches a processor coupled to the sensor and image device; and

a memory storing instructions that, when executed by the processor, cause the system to perform operations (see paragraph [0024]) comprising:

generating a point cloud of coordinate points based, at least in part, on the positional sensor data (see paragraphs [0020]-[0022]). Ganatra teaches generating a store of points based on the location data;

receiving second image data of the anatomic region, wherein the second image data is generated based, at least in part, on imaging of the anatomic region (see paragraph [0022]). Ganatra teaches receiving second image data from the bronchoscope;

adding one or more coordinate points to the point cloud at one or more locations corresponding to one or more positions of the image capture device within the anatomic region (see paragraphs [0020] and [0026]). Ganatra teaches adding coordinates at locations corresponding to the device and the points in the model that coincide with the image; and

generating a registration between at least a portion of the point cloud and at least a portion of the second image data (see paragraph [0022]).
Ganatra teaches generating a registration between the location points and the images from the device.

Ganatra fails to expressly disclose generating additional points based on data from the first image data.

Glossop discloses: adding one or more coordinate points to the point cloud at one or more locations corresponding to one or more positions of the image capture device within the anatomic region associated with the first image data (see paragraphs [0028]-[0030] and [0035]-[0038]). Glossop teaches adding points at multiple locations corresponding to intraoperative images and pre-operative images; and

updating the registration based, at least in part, on the first image data (see paragraph [0038]). Glossop teaches updating the registration with the new locations based on the pre-operative image data.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra to include adding coordinate points at one or more locations associated with the image capture device and the first image data for the purpose of enabling the shape of a flexible endoscope or other instrument to be determined without the use of additional imaging, as taught by Glossop.

Ganatra and Glossop disclose all of the elements of this claim (see Claim 1) except matching the patient anatomy in the first image data with the patient anatomy in the second image data.

Cosman discloses: the patient anatomy in the first image data and the patient anatomy in the second image data matched (see column 8, lines 26-44, and column 8, line 66 through column 9, line 5). Cosman teaches the image data and patient anatomy are matched.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ganatra and Glossop to include matching the images of scans with images of the patient anatomy for the purpose of efficiently registering data while navigating a surgical instrument, as taught by Cosman.

Response to Arguments

Applicant's arguments, see REM, filed 12/9/25, with respect to the rejections of Claims 1 and 20 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new grounds of rejection is made in view of Ganatra and Glossop.

Claims 1 and 20: Applicant argues that nowhere does the Office Action establish that Ganatra's coordinates for sensor X (alleged "one or more locations") correspond to one or more positions of the bronchoscope associated with the bronchoscope images (alleged "first image data"), that the Office Action therefore fails to establish a prima facie case of unpatentability, and that any subsequent action cannot be made final. The Examiner agrees that Ganatra does not teach images from the bronchoscope. The Examiner relies on new art, Glossop, to teach that the images from the endoscope are used to add additional points (see paragraphs [0028]-[0030] and [0035]-[0038]). See the above rejections of Claims 1 and 20. Therefore, Ganatra combined with Glossop discloses the limitations of the claims.

Applicant argues that nowhere do cited paragraphs [0024]-[0026] of Ganatra disclose updating the registration based, at least in part, on the bronchoscope images (alleged "first image data"). The Examiner agrees that Ganatra does not disclose updating the registration based on the first image data. The Examiner relies on Glossop to teach updating the registration with new points based on the preoperative and intraoperative images (see paragraph [0038]). See the above rejections of Claims 1 and 20.
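The mechanism the Examiner attributes to Glossop, appending image-derived coordinate points to the sensor point cloud and then re-estimating the registration, can be caricatured as follows. This is an editorial sketch under stated assumptions (a translation-only fit stands in for a full registration); it is not drawn from Ganatra, Glossop, or the application, and every name in it is hypothetical.

```python
# Hypothetical sketch of the disputed limitation: points derived from the
# endoscope ("first") image data are appended to the positional-sensor
# point cloud, and the registration to the ("second") image-data frame is
# re-estimated. Centroid alignment (translation only) stands in for a
# full rigid or deformable registration.
import numpy as np

def update_registration(sensor_pts, image_pts, model_pts):
    """Grow the point cloud with image-derived points, then re-fit a shift."""
    cloud = np.vstack([sensor_pts, image_pts])           # add coordinate points
    translation = model_pts.mean(axis=0) - cloud.mean(axis=0)
    return cloud, translation
```

With matched clouds offset by a known shift, the re-fit recovers that shift; a production system would instead run a weighted rigid or deformable registration over the combined cloud.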
Applicant argues, "For at least these reasons, Ganatra fails to disclose each and every element of claim 1. Accordingly, claim 1 is allowable over Ganatra. Further, claims 2-3, 6-8, and 17-20, which depend from claim 1 and recite additional features, are allowable over Ganatra for at least the same reasons as those discussed above with respect to claim 1, and further in view of their own respective features. Additionally, independent claim 20 recites some features similar to independent claim 1 and is allowable over Ganatra for at least similar reasons as those discussed above with respect to claim 1, and further in view of its own respective features." The Examiner agrees that Ganatra does not teach each and every element of Claims 1 and 20. The Examiner has introduced Glossop to teach the limitations of the claims. The combination of Ganatra and Glossop teaches the elements of Claims 1 and 20.

Claim 32: Applicant argues, "For at least these reasons, Ganatra in view of Cosman fails to teach or suggest all of the features of claim 32. Accordingly, claim 32 is allowable over Ganatra in view of Cosman. For all of the foregoing reasons, Applicant respectfully requests that the rejection of claim 32 under 35 U.S.C. § 103 be withdrawn and that the pending claims be allowed." For the same reasons stated above, the combination of Ganatra, Glossop, and Cosman teaches the elements of Claim 32.

Claim 5: Applicant argues, "Claim 5, which depends from claim 1 and recites additional features, is patentable over Ganatra and Polidor for at least the same reasons as those discussed above with respect to claim 1, and further in view of its own respective features. Accordingly, Applicant respectfully requests that the rejection of claim 5 under § 103 be withdrawn and that the pending claims be allowed." The Examiner agrees that Ganatra and Polidor do not teach all the elements of Claim 1.
For the same reasons stated above, the combination of Ganatra, Glossop, and Polidor teaches the elements of Claim 1.

Claims 9-11: Applicant argues, "Claims 9-11, which depend from claim 1 and recite additional features, are patentable over Ganatra and Akimoto for at least the same reasons as those discussed above with respect to claim 1, and further in view of their own respective features. Accordingly, Applicant respectfully requests that the rejection of claims 9-11 under § 103 be withdrawn and that the pending claims be allowed." The Examiner agrees that Ganatra and Akimoto do not teach all the elements of Claim 1. For the same reasons stated above, the combination of Ganatra, Glossop, and Akimoto teaches the elements of Claim 1.

Claims 12-16: Applicant argues, "Claims 12-16, which depend from claim 1 and recite additional features, are patentable over Ganatra and Yu for at least the same reasons as those discussed above with respect to claim 1, and further in view of their own respective features. Accordingly, Applicant respectfully requests that the rejection of claims 12-16 under § 103 be withdrawn and that the pending claims be allowed." The Examiner agrees that Ganatra and Yu do not teach all the elements of Claim 1. For the same reasons stated above, the combination of Ganatra, Glossop, and Yu teaches the elements of Claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE, whose telephone number is (571) 270-7259. The examiner can normally be reached M-F, 8 a.m.-4 p.m. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TIONNA M BURKE/Examiner, Art Unit 2178 1/9/26

Prosecution Timeline

Sep 22, 2022
Application Filed
Mar 21, 2025
Non-Final Rejection — §103
May 19, 2025
Examiner Interview Summary
May 19, 2025
Applicant Interview (Telephonic)
Jun 25, 2025
Response Filed
Sep 23, 2025
Final Rejection — §103
Nov 05, 2025
Examiner Interview Summary
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 06, 2025
Response after Non-Final Action
Dec 09, 2025
Request for Continued Examination
Jan 04, 2026
Response after Non-Final Action
Jan 10, 2026
Non-Final Rejection — §103
Mar 17, 2026
Examiner Interview Summary
Mar 17, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596470
GESTURE-BASED MENULESS COMMAND INTERFACE
2y 5m to grant Granted Apr 07, 2026
Patent 12591731
SYSTEM AND METHOD FOR SELECTING RELEVANT CONTENT IN AN ENHANCED VIEW MODE
2y 5m to grant Granted Mar 31, 2026
Patent 12572698
INFRASTRUCTURE METHODS AND SYSTEMS FOR EXTENDING CUSTOMER RELATIONSHIP MANAGEMENT PLATFORM
2y 5m to grant Granted Mar 10, 2026
Patent 12564152
SYSTEM AND METHOD FOR MANAGEMENT OF SENSOR DATA BASED ON HIGH-VALUE DATA MODEL
2y 5m to grant Granted Mar 03, 2026
Patent 12547823
DYNAMICALLY AND SELECTIVELY UPDATED SPREADSHEETS BASED ON KNOWLEDGE MONITORING AND NATURAL LANGUAGE PROCESSING
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
73%
With Interview (+19.3%)
4y 9m
Median Time to Grant
High
PTA Risk
Based on 431 resolved cases by this examiner. Grant probability derived from career allow rate.
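The projection figures above are simple arithmetic over the examiner's career record. A minimal sketch of how they appear to combine, assuming the grant probability is the raw career allow rate and the "+19.3%" interview lift is an additive percentage-point adjustment (both assumptions inferred from the page, not documented):

```python
# Sketch of the projection arithmetic, under the assumptions stated above.
granted = 233          # applications granted by this examiner
resolved = 431         # total resolved cases
interview_lift = 19.3  # percentage points, per "Interview Lift"

allow_rate = granted / resolved * 100          # career allow rate, in %
with_interview = allow_rate + interview_lift   # interview-adjusted, in %

print(f"Grant probability: {allow_rate:.0f}%")      # prints "Grant probability: 54%"
print(f"With interview:    {with_interview:.0f}%")  # prints "With interview:    73%"
```

Rounded to whole percentage points, this reproduces the 54% and 73% shown above.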
