Prosecution Insights
Last updated: April 19, 2026
Application No. 18/810,170

SYNCHRONIZED MOTION OF INDEPENDENT SURGICAL DEVICES TO MAINTAIN RELATIONAL FIELD OF VIEWS

Non-Final OA (§101, §103)
Filed: Aug 20, 2024
Examiner: YANG, YI-SHAN
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Cilag GmbH International
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69%, above average (262 granted / 380 resolved; -1.1% vs TC avg)
Interview Lift: +57.2% (strong), from resolved cases with interview
Avg Prosecution: 3y 5m (typical timeline); 42 applications currently pending
Career History: 422 total applications across all art units
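
The figures above are simple ratios over this examiner's resolved cases. As a quick sanity check, here is a minimal sketch of the arithmetic the dashboard presumably runs; only the 262/380 counts come from this page, and the Tech Center average and the with/without-interview sub-rates are hypothetical placeholders chosen for illustration.

```python
# Hedged sketch of the career-rate arithmetic behind the figures above.
# Only the 262 granted / 380 resolved counts are taken from this page; the
# TC average and the interview sub-rates below are illustrative assumptions.

granted, resolved = 262, 380
career_allow_rate = granted / resolved               # ~0.689, displayed as 69%

tc_avg_allow_rate = 0.700                             # assumed TC average
delta_vs_tc = career_allow_rate - tc_avg_allow_rate   # ~-0.011 -> "-1.1% vs TC avg"

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative lift in allowance rate for resolved cases with an interview."""
    return rate_with / rate_without - 1.0

# Illustrative sub-rates only; a ~0.99 vs ~0.63 split yields a lift near the
# +57.2% shown above.
print(f"{career_allow_rate:.1%}  {delta_vs_tc:+.1%}  {interview_lift(0.99, 0.63):+.1%}")
```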

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 37.3% (-2.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 32.8% (-7.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 380 resolved cases.

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings
The drawings filed on August 20, 2024 are accepted.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5 and 7-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 of the subject matter eligibility test (see MPEP 2106.03): Claims 1-13 are directed to an "apparatus," which describes one of the four statutory categories of patentable subject matter, i.e., a machine. Claims 14-20 are directed to a "method," which describes one of the four statutory categories of patentable subject matter, i.e., a process.

Step 2A of the subject matter eligibility test (see MPEP 2106.04).

Prong One: Claims 1 and 14 recite ("set forth" or "describe") the abstract idea of "a mental process" (MPEP 2106.04(a)(2).III.), substantially as follows: "determine, based on the video data stream, a coupled field of view between the first imaging device and the second imaging device; determine, based on the video data stream, that the second imaging device has moved; and adjust an imaging parameter of the first imaging device to maintain the coupled field of view." In claims 1 and 14, the above recited steps can be practically performed in the human mind, with the aid of pen and paper. If a person were to visually examine, i.e., perform an observation of, the video data stream, he/she would be able to identify, i.e., to determine, the FOV of the first imaging device, the FOV of the second imaging device, and the overlap of the two FOVs, which is the coupled FOV between the two devices. He or she may further visually examine the video data stream to find out whether the second imaging device has moved, for example, based on a comparison of the video data collected at different time points. He or she would then further be able to determine how to move the first imaging device based on how the second imaging device has moved, or to determine any other settings of the first imaging device (i.e., adjust an imaging parameter), to assist in maintaining the coupled field of view. There is nothing recited in the claim to suggest an undue level of complexity in how the coupled FOV is determined, how the movement of the second imaging device is determined, or how the imaging parameter of the first imaging device is adjusted. Therefore, a person would be able to perform the determination and adjustment mentally.

Prong Two: Claims 1 and 14 do not include additional elements that integrate the mental process into a practical application. This judicial exception is not integrated into a practical application. In particular, the claims recite the additional steps of (1) "a processor being configured to…" and (2) "receiving a video data stream captured by a second imaging device, where the first imaging device is located on a first side of an anatomical barrier, where the second imaging device is located on a second side of the anatomical barrier, and where the first side and the second side are opposing sides of the anatomical barrier."

These steps represent merely data gathering or pre-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality. In addition, with regard to step (1), a claim that requires a computer may still recite a mental process. MPEP 2106.04(a)(2).III.C.: "Performing a mental process on a generic computer, in a computer environment, or using a computer as a tool to perform the steps are considered a mental process." As a whole, the additional elements merely serve to gather and feed information to the abstract idea and to output a notification based on the abstract idea, while generically implementing it on conventionally used tools. There is no practical application because the abstract idea is not applied, relied on, or used in a meaningful way. The claims do not recite any further actively performed steps that utilize the adjusted imaging parameter to achieve any practical application. No improvement to the technology is evident. Therefore, the additional elements, alone or in combination, do not integrate the abstract idea into a practical application.

Step 2B of the subject matter eligibility test (see MPEP 2106.05). Claims 1 and 14 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claims recite the additional steps of (1) "a processor being configured to…" and (2) "receiving a video data stream captured by a second imaging device, where the first imaging device is located on a first side of an anatomical barrier, where the second imaging device is located on a second side of the anatomical barrier, and where the first side and the second side are opposing sides of the anatomical barrier." These steps represent mere data gathering, data outputting, or pre-/post-/extra-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality. For similar reasons as set forth in Step 2A, Prong Two above, the additional elements do not provide an inventive concept under Step 2B.

Dependent Claims
The dependent claims incorporate all the limitations of their respective independent claims. The following analysis focuses on the limitations recited in the dependent claims to determine whether they merely recite a further abstract idea, or whether they recite additional elements that may either amount to significantly more than the abstract idea in their respective independent claims, or may integrate that abstract idea into a practical application.

The following dependent claims merely further define the abstract idea and are, therefore, directed to an abstract idea for similar reasons as stated in the analysis for their respective independent claims, and hence are patent ineligible: compare a first FOV and adjust the imaging parameter based on the comparison, and determine that the coupled FOV is maintained (claim 12) – a similar consideration as presented for claim 1 applies here: comparing and adjusting are considered mental steps that he or she may reasonably perform by observing and examining the video data stream; further, to determine that the coupled FOV is maintained, he/she may observe the coupled FOV and determine whether it is maintained.
The following dependent claims merely further describe the extra-solution activities and therefore do not amount to significantly more than the judicial exception or integrate the abstract idea into a practical application, for similar reasons as stated in the analysis for their respective independent claims, and hence are patent ineligible: describing the anatomical barrier (claims 2 and 15); describing the type of the views (claims 3 and 16); describing a further step of generating a visualization, which is considered an additional pre-solution data collection step (claims 4-5 and 17-18); describing a further step of data collection by the processor (claims 7-8); describing the imaging parameter (claims 9-10); describing a further component of an imaging sensor for collecting the video data (claim 11 – using an imaging sensor to collect video data does not provide an inventive concept under Step 2B); and describing a further step of communication by the processor (claim 13), which is considered an insignificant extra-solution activity.

Taken alone and in combination, the additional elements do not integrate the judicial exception into a practical application, at least because the abstract idea is not applied, relied on, or used in a meaningful way. They also do not add anything significantly more than the abstract idea. Their collective functions merely provide computer/electronic implementation and processing, and no additional elements beyond those of the abstract idea. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or an output device, improves another technology or technical field, etc. Therefore, the claims are rejected as being directed to non-statutory subject matter. Based on the above consideration and analysis, claims 1-5 and 7-20 are patent ineligible, i.e., rejected under 35 U.S.C. 101.

It is noted that claim 6 is patent eligible. Specifically, claim 6 sets forth steps of performing object registration on an object and synchronizing the first FOV with the second FOV based on the object registration. The steps of object registration and FOV synchronization are not abstract ideas, nor could they be considered insignificant extra-solution activities. These steps hence amount to significantly more than the exception itself as identified in claim 1.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-5, 8-12, 14-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Valdes Garcia et al., US 12,430,868 B2, hereinafter Valdes Garcia, in view of Navab et al., US 2021/0378750 A1, hereinafter Navab.

Claims 1 and 14. Valdes Garcia teaches in FIG. 4 a first imaging device comprising a processor (Col. 7, ll. 51-53: the system 400 includes a first imaging system 410, a second imaging system 420, and a sensor control and data processing sub-system 430; Col. 7, line 66 to Col. 8, line 2: the sensor control and data processing sub-system 430, the first imaging system 410 and the second imaging system 420 are enabled for wireless communications therebetween; and Claim 1: an imaging system comprising a first imaging system that captures initial sensor data in a form of visible domain data), wherein the processor is configured to: receive a video data stream captured by a second imaging device (Col. 1, ll. 17-20: computer vision algorithms and machine-learning methods have matured significantly and it is now possible to extract information from visible-domain cameras and/or video automatically); determine, based on the video data stream, a coupled field of view between the first imaging device and the second imaging device (Col. 8, ll. 4-7: the images from both the first imaging system 410 and the second imaging system 420 share a common set of x, y coordinates in a same or similar scene (here, shown as same scene 450); and ll. 11-17: an object of interest…and its location can be identified by a computer vision or machine learning (ML) algorithm applied on the data captured by the first imaging system; the shared coordinates are used to control the imaging region to which the second imaging system is directed (pointed)) – the "same scene" is considered the "coupled field of view" as claimed; determine, based on the video data stream, that the second imaging device has moved; and adjust an imaging parameter of the first imaging device to maintain the coupled field of view (Claim 1: the first and second imaging system share a common set of x and y coordinates for any captured scenes, the positional data being generated based on the common set, and being used to control an effective field of view (FoV) of the second imaging system, the controller subsystem being further configured for dynamically adjusting an effective Field of View (FoV) of the second imaging system based on real-time coordinates changes detected through the first imaging system) – since both imaging systems share a common coordinate set, the coordinate changes detected through the first imaging system are considered the movement of the second imaging device being determined.

Valdes Garcia does not teach that the first imaging device is located on a first side of an anatomical barrier, the second imaging device is located on a second side of the anatomical barrier, and the first side and the second side are opposing sides of the anatomical barrier.
However, in an analogous imaging system FOV determination and adjustment field of endeavor, Navab teaches that the first imaging device is located on a first side of an anatomical barrier, the second imaging device is located on a second side of the anatomical barrier, and the first side and the second side are opposing sides of the anatomical barrier ([0008]: a display device defining a plane and being located on a first side of the physical object…a perspective of the rendering is based on a virtual camera that has a virtual position located on a second side of the physical object opposite to the first side; [0066] and [0071]: the sides S1 and S2, and the physical object O; [0064]: the physical object (O) can be a physical anatomy) – the display device that receives the image captured by the camera is considered the first imaging device, and the physical object that is a physical anatomy is the "anatomical barrier" as claimed. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the first imaging device and the second imaging device of Valdes Garcia employ such a feature of being positioned on opposite sides of an anatomical barrier, as taught in Navab, for the advantage of "reducing the complexity of understanding the spatial transformations between the user's viewpoint, a physical object, 2D and 3D data, and tools during computer-assisted interventions" and "aiding in interaction with a physical object", as suggested in Navab, [0059] and [0008].

Claims 2 and 15. Navab further teaches that the anatomical barrier comprises tissue that defines a first anatomical space within which is the first imaging device and a second anatomical space within which is the second imaging device ([0064]: the physical object (O) can be a physical anatomy). A physical anatomy itself is an anatomical space. The side where the first imaging device is positioned is the "first anatomical space" as claimed, and the opposite side where the second imaging device is positioned is the "second anatomical space" as claimed.

Claims 4 and 17. Navab further teaches that the processor is further configured to generate a visualization associated with the anatomical barrier in the video data stream based on information from an imaging sensor of the first imaging device ([0068]: these computer images can be rendered on the display device 100; these computer images can be static or dynamic and can include actual/real video graphic images, mixed reality, augmented reality, virtual reality images or models, or any combination thereof).

Claims 5 and 18. Navab further teaches that the visualization comprises information from the imaging sensor of the first imaging device that makes a portion of the anatomical barrier appear transparent ([0005]: prior techniques tracked a semi-transparent display for medical in-situ augmentation, which incorporated a head tracker and stereo glasses; such semi-transparent displays include a half-silvered glass pane, which reflected the image from a computer display; others have addressed the same problem of creating an AR-window on patient anatomy using a semi-transparent display between the patient and the surgeon) – "semi-transparent" is considered a sub-class of transparent.

Claim 8. Valdes Garcia further teaches receive relative field of view positioning information from electromagnetic sensing of the second imaging device (Claim 4: the second imaging system comprises a radar imaging system).

Claims 9 and 19. Valdes Garcia further teaches that the imaging parameter comprises an electronically controlled field of view (Col. 7, ll. 51-53: the system 400 includes a first imaging system 410, a second imaging system 420, and a sensor control and data processing sub-system 430; Col. 7, line 66 to Col. 8, line 2: the sensor control and data processing sub-system 430, the first imaging system 410 and the second imaging system 420 are enabled for wireless communications therebetween; and Claim 1: an imaging system comprising a first imaging system that captures initial sensor data in a form of visible domain data) – since the imaging system is controlled by the sensor control and data processing sub-system, the field of view is electronically controlled.

Claims 10 and 20. Valdes Garcia further teaches that the imaging parameter comprises any of a position of the first imaging device, a focal length of the first imaging device, or a portion of a field of view associated with the first imaging device that is displayed to a user (Claim 1: the first and second imaging system share a common set of x and y coordinates for any captured scenes, the positional data being generated based on the common set, and being used to control an effective field of view (FoV) of the second imaging system, the controller subsystem being further configured for dynamically adjusting an effective Field of View (FoV) of the second imaging system based on real-time coordinates changes detected through the first imaging system).

Claim 11. Valdes Garcia further teaches an imaging sensor that senses light of a first band, wherein the video data stream captured by the second imaging device represents light of a second band that is different from the first band (Claim 3: the first imaging system comprises an imaging device selected from the group consisting of an RGB camera, an InfraRed (IR) camera, and a thermal camera; Claim 4: the second imaging system comprises a radar imaging system) – an IR camera senses a different EM wave band from a radar imaging system, which senses radar waves.

Claim 12. Valdes Garcia further teaches iteratively compare a first field of view associated with the first imaging device to the video data stream, and adjust the imaging parameter based on the comparison; and on a condition that the first field of view and the video data stream are aligned with respect to a registered object, determine that the coupled field of view is maintained (Claim 1: the first and second imaging system share a common set of x and y coordinates for any captured scenes, the positional data being generated based on the common set, and being used to control an effective field of view (FoV) of the second imaging system, the controller subsystem being further configured for dynamically adjusting an effective Field of View (FoV) of the second imaging system based on real-time coordinates changes detected through the first imaging system) – dynamically adjusting an effective FOV based on real-time coordinate changes is considered iteratively comparing and adjusting for alignment as claimed.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Valdes Garcia et al., US 12,430,868 B2, hereinafter Valdes Garcia, in view of Navab et al., US 2021/0378750 A1, hereinafter Navab, further in view of Young et al., US 2020/0008768 A1, hereinafter Young.
Claim 6. Valdes Garcia and Navab combined teach all the limitations of claim 1, including a coupled field of view between the first imaging device and the second imaging device (Valdes Garcia: the same scene 450). Neither Valdes Garcia nor Navab teaches that the coupled field of view is obtained by performing object registration on an object, wherein the object is visible in a first field of view associated with the first imaging device and in a second field of view associated with the second imaging device; and synchronize the first field of view with the second field of view based on the object registration. However, in an analogous field of endeavor of processing imaging data to couple fields of view, Young teaches performing object registration on an object, wherein the object is visible in a first field of view associated with the first imaging device and in a second field of view associated with the second imaging device, and synchronizing the first field of view with the second field of view based on the object registration to obtain a composite field of view ([0169]: combining the first X-ray image data and the second X-ray image data to obtain an output image of the region of interest having a composite field of view; [0132]: to combine the first X-ray image data and the second X-ray image data using an image stitching algorithm; an image stitching algorithm would require the registration of the first X-ray image data to the second X-ray image data, calibration of the two images to each other, then the blending of the two images). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the composite field of view of Valdes Garcia and Navab combined employ such a feature of performing object registration on an object, wherein the object is visible in a first field of view associated with the first imaging device and in a second field of view associated with the second imaging device, and synchronizing the first field of view with the second field of view based on the object registration to obtain a composite field of view, as taught in Young, for the advantage of "obtaining a non-erroneous field of view in order to not miss important anatomical details", as suggested in Young, [0002].

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Valdes Garcia et al., US 12,430,868 B2, hereinafter Valdes Garcia, in view of Navab et al., US 2021/0378750 A1, hereinafter Navab, further in view of Ma et al., US 2015/0236848 A1, hereinafter Ma.

Claim 13. Valdes Garcia and Navab combined teach all the limitations of claim 1. Valdes Garcia further teaches that the first and the second imaging devices communicate via wireless communications (Col. 7, ll. 51-53: the system 400 includes a first imaging system 410, a second imaging system 420, and a sensor control and data processing sub-system 430; Col. 7, line 66 to Col. 8, line 2: the sensor control and data processing sub-system 430, the first imaging system 410 and the second imaging system 420 are enabled for wireless communications therebetween). Neither Valdes Garcia nor Navab teaches that the communication is via a handshake protocol with the second imaging device to establish cooperative operation.

However, in an analogous imaging device wireless communication field of endeavor, Ma teaches that the imaging devices communicate via a handshake protocol to establish cooperative operation ([0017]: a method for time-reversal wireless communication includes, at a first device, receiving a handshake signal transmitted from a second device…the handshake signal including a sequence of codes known to the first and second device, performing cross correlation operations on the known sequence of codes and the received handshake signal). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the first imaging device and the second imaging device of Valdes Garcia and Navab combined employ such a feature of communicating via a handshake protocol to establish cooperative operation, as taught in Ma, for the well-recognized advantage of providing synchronized operation between the devices.

Allowable Subject Matter
Claims 3, 7 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The limitation recited in claims 3 and 16 in regard to the features of "the processor is further configured to generate an endoscopic view and wherein the video stream captured by the second imaging device represents a laparoscopic view", in combination with the other claimed elements, is not taught or disclosed in the prior art. The limitation recited in claim 7 in regard to the features of "the coupled field of view is obtained by receiving relative field of view positioning information from cone-beam computerized tomography imaging that comprises a view of the first imaging device and the second imaging device", in combination with the other claimed elements, is not taught or disclosed in the prior art.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YI-SHAN YANG, whose telephone number is (408) 918-7628. The examiner can normally be reached Monday-Friday, 8am-4pm PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pascal M Bui-Pho, can be reached at 571-272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YI-SHAN YANG/Primary Examiner, Art Unit 3798
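
For orientation only: the limitations the §101 rejection characterizes as a mental process describe a feedback loop, i.e., estimate the overlap (coupled field of view) between the two devices, detect that the second device has moved, and adjust the first device so the overlap is kept. The sketch below is a hypothetical illustration of that loop; the axis-aligned rectangle model, the names, and the shift-by-the-same-offset policy are assumptions for illustration, not the applicant's disclosed implementation.

```python
# Hypothetical sketch of the coupled-FOV maintenance loop recited in claims 1/14,
# as characterized in the rejection. All modeling choices here are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FieldOfView:
    """Axis-aligned footprint of a device's view in a shared 2D coordinate frame."""
    x: float
    y: float
    width: float
    height: float

def coupled_fov(a: FieldOfView, b: FieldOfView) -> Optional[FieldOfView]:
    """Overlap of two FOVs ("coupled field of view"); None if they do not intersect."""
    left, right = max(a.x, b.x), min(a.x + a.width, b.x + b.width)
    bottom, top = max(a.y, b.y), min(a.y + a.height, b.y + b.height)
    if right <= left or top <= bottom:
        return None
    return FieldOfView(left, bottom, right - left, top - bottom)

def maintain_coupled_fov(first: FieldOfView, second_prev: FieldOfView,
                         second_now: FieldOfView) -> FieldOfView:
    """If the second device has moved, shift the first device's FOV by the same
    offset ("adjust an imaging parameter") so the established overlap is preserved."""
    dx, dy = second_now.x - second_prev.x, second_now.y - second_prev.y
    if (dx, dy) != (0.0, 0.0):
        first = FieldOfView(first.x + dx, first.y + dy, first.width, first.height)
    return first

# Example: the second device shifts right by 2 units; the first FOV follows,
# so the coupled FOV keeps the same size.
a = FieldOfView(0, 0, 10, 10)
b_prev, b_now = FieldOfView(6, 0, 10, 10), FieldOfView(8, 0, 10, 10)
print(coupled_fov(a, b_prev), coupled_fov(maintain_coupled_fov(a, b_prev, b_now), b_now))
```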

Prosecution Timeline

Aug 20, 2024: Application Filed
Feb 22, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594043: METHODS AND SYSTEMS FOR FAST FILTER CHANGE (2y 5m to grant; granted Apr 07, 2026)
Patent 12594003: DEVICE, SYSTEM AND METHOD FOR DETERMINING RESPIRATORY INFORMATION OF A SUBJECT (2y 5m to grant; granted Apr 07, 2026)
Patent 12594063: TISSUE IMAGING IN PRESENCE OF FLUID DURING BIOPSY PROCEDURE (2y 5m to grant; granted Apr 07, 2026)
Patent 12592318: Neuronal Activity Mapping Using Phase-Based Susceptibility-Enhanced Functional Magnetic Resonance Imaging (2y 5m to grant; granted Mar 31, 2026)
Patent 12575805: ULTRASOUND PROBE WITH AN INTEGRATED NEEDLE ASSEMBLY AND A COMPUTER PROGRAM PRODUCT, A METHOD AND A SYSTEM FOR PROVIDING A PATH FOR INSERTING A NEEDLE OF THE ULTRASOUND PROBE (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 99% (+57.2%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 380 resolved cases by this examiner. Grant probability derived from career allow rate.
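
The note above says the grant probability is derived from the career allow rate. One plausible (assumed, not documented on this page) way the with-interview figure could follow from the base rate and the interview lift is a multiplicative lift capped just below certainty:

```python
# Assumed projection model, for illustration only: multiplicative interview lift,
# capped below 100%. The tool's actual methodology is not documented on this page.
def projected_grant_probability(base_rate: float, lift: float, cap: float = 0.99) -> float:
    return min(base_rate * (1.0 + lift), cap)

# 69% base with the +57.2% lift exceeds 100%, so the projection caps at 99%.
print(projected_grant_probability(0.69, 0.572))  # 0.99
```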
