Prosecution Insights
Last updated: April 19, 2026
Application No. 18/461,630

SURGICAL ROBOTIC SYSTEM AND METHOD WITH MULTIPLE CAMERAS

Status: Final Rejection (§103)
Filed: Sep 06, 2023
Examiner: Demosky, Patrick E
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: Covidien LP
OA Round: 4 (Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 3y 1m
Grant Probability With Interview: 55%

Examiner Intelligence

Grants 65% of resolved cases.

Career Allow Rate: 65% (244 granted / 377 resolved; +6.7% vs Tech Center average)
Interview Lift: -9.7% for resolved cases with interview (minimal, roughly -10%)
Typical Timeline: 3y 1m average prosecution
Career History: 399 total applications across all art units (22 currently pending)
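The headline figures above follow from the raw counts on this page. As a minimal sketch (assuming the dashboard computes them as plain ratios and offsets, which it does not state explicitly):

```python
# Sketch: reproducing the dashboard's headline examiner figures from the raw
# counts shown above ("244 granted / 377 resolved"). The exact formulas the
# dashboard uses are an assumption; these are plain ratios and offsets.

granted, resolved = 244, 377

career_allow_rate = granted / resolved           # ~0.647, displayed as 65%

# "+6.7% vs TC avg" implies a Tech Center average near 58%:
tc_avg_implied = career_allow_rate - 0.067

# A -9.7% interview lift implies roughly 55% for cases with an interview:
with_interview = career_allow_rate - 0.097

print(f"career allow rate: {career_allow_rate:.1%}")   # 64.7%
print(f"implied TC avg:    {tc_avg_implied:.1%}")      # 58.0%
print(f"with interview:    {with_interview:.1%}")      # 55.0%
```

Note that 244/377 is closer to 64.7%; the page appears to round to whole percentages.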

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 61.5% (+21.5% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 377 resolved cases.
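Each row above pairs a rate with its delta against the Tech Center average, so the implied baseline is simply rate minus delta. A quick consistency check (the interpretation of these figures is an assumption):

```python
# Consistency check on the statute-specific card: for each statute, the
# implied Tech Center baseline is rate - delta. Interpreting the rates as
# shares of this examiner's rejections by statute is an assumption.
stats = {
    "§101": (2.4, -37.6),
    "§103": (61.5, 21.5),
    "§102": (17.7, -22.3),
    "§112": (14.0, -26.0),
}
for statute, (rate, delta) in stats.items():
    baseline = rate - delta
    print(f"{statute}: examiner {rate:.1f}%, implied TC baseline {baseline:.1f}%")
```

All four rows imply the same 40.0% baseline, and the four rates sum to 95.6%, which suggests they are shares of rejections by statute rather than independent allow rates.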

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 2/23/2026 have been fully considered, but they are drawn towards newly amended claim language. Regarding Rejections under 35 U.S.C. § 103, Applicant contends that the cited prior art fails to disclose newly amended limitations of independent claims 1, 9, and 17 including: "wherein the first camera is inserted into the tissue structure to capture the first video stream of the internal surface and the second camera is positioned outside the tissue structure to capture the second video stream of the external surface" and "wherein the modified video stream includes an augmented view of the tissue structure generated by replacing a portion of one of the first video stream or the second video stream with a reconstructed portion based on the other of the first video stream or the second video stream". See the rejection below for how the cited art in light of new/existing references reads on the newly amended language, as well as the examiner's interpretation of the cited art in view of the presented claim set.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-5 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Klontz (WO 2022103770 A1) (hereinafter Klontz) in view of Shelton, IV et al. (US 2023011079 A1) (hereinafter Shelton).

Regarding claim 1, Klontz discloses:

An imaging system comprising: [See Klontz, Fig. 1 illustrates an imaging system (100).]

a first camera configured to capture a first video stream of a first tissue surface; [See Klontz, ¶ 0011, 0013-0014, Fig. 1 illustrates a first camera (111A) capturing a first image stream (127A) of a tissue or structure in the obtained images.]

a second camera configured to capture a second video stream of a second tissue surface, [See Klontz, ¶ 0011, 0013-0016, Fig. 1 illustrates a second camera (111B) capturing a second image stream (127B) of a tissue or structure in the obtained images. Particularly, the obstruction can be a sensor housing (e.g., sensor housing 217) or it can be a portion of the subject's 117 bowels in the subject's 117 body cavity.]

a video processing device configured to: [See Klontz, ¶ 0013-0016, Fig. 1 discloses an imaging controller (105).]

receive the first video stream from the first camera and the second video stream from the second camera; and [See Klontz, Fig. 1 illustrates an imaging controller (105) receiving a first image stream 127A and a second image stream 127B from first and second cameras, respectively.]
Klontz does not appear to explicitly disclose:

wherein the first tissue surface includes an internal surface of a tissue structure and the second tissue surface includes an external surface of the tissue structure, such that the first and second tissue surfaces are opposing surfaces of a common tissue wall, wherein the first camera is inserted into the tissue structure to capture the first video stream of the internal surface and the second camera is positioned outside the tissue structure to capture the second video stream of the external surface;

modify at least one the first video stream or the second video stream based on one of the first video stream or the second video stream to generate a 3D volume having a modified video stream, wherein the modified video stream includes an augmented view of the tissue structure generated by replacing a portion of one of the first video stream or the second video stream with a reconstructed portion based on the other of the first video stream or the second video stream; and

a first screen coupled to the video processing device and configured to display the 3D volume having the modified video stream.

However, Shelton discloses:

wherein the first tissue surface includes an internal surface of a tissue structure and the second tissue surface includes an external surface of the tissue structure, such that the first and second tissue surfaces are opposing surfaces of a common tissue wall, [See Shelton, ¶ 0011, 0017 discloses determining, by the controller and based on the transmitted image data, i) a first interaction location configured to be created inside of at least one of the natural body lumen and the organ by the surgical instrument, and ii) a second interaction location configured to be created outside of at least one of the natural body lumen and the organ; See Shelton, ¶ 0201, 0208, 0259-0260, 0275, Fig. 26 discloses that monitoring of interior and exterior portions of interconnected surgical instruments can be performed in order to image both the internal and external interactions of the surgical instruments with adjacent surgical instruments. Noting Fig. 26, element 2122 corresponds with a surgical instrument cooperating with an exterior of an organ/tissue of interest, and it is noted that said surgical instrument is, or includes, an optical sensor (e.g., a camera).]

wherein the first camera is inserted into the tissue structure to capture the first video stream of the internal surface and the second camera is positioned outside the tissue structure to capture the second video stream of the external surface; [See Shelton, Fig. 26 illustrates a channel arm of a surgical instrument inserted on an interior of a tissue structure, and a "second camera" surgical instrument component 2122 on an exterior portion of the tissue structure.]

modify at least one the first video stream or the second video stream based on one of the first video stream or the second video stream to generate a 3D volume having a modified video stream, [See Shelton, ¶ 0271-0272 discloses generating a 3D model of an instrument which can be overlaid into an image of the system which cannot see an alternate view. Since the representative depiction is a generated image, various properties of the image (e.g., the transparency, color) can also be manipulated to allow the system to be clearly shown as not within the real-time visualization video feed, but as a construct from the other view. If the user were to switch between imaging systems, the opposite view could also have the constructed instruments within its field of view.]

wherein the modified video stream includes an augmented view of the tissue structure generated by replacing a portion of one of the first video stream or the second video stream with a reconstructed portion based on the other of the first video stream or the second video stream; and [See Shelton, ¶ 0085-0086, 0091, 0120, 0130, 0271-0272 discloses generating a 3D model of an instrument which can be overlaid into an image of the system which cannot see an alternate view, and additionally augmenting a view of a surgical site including tissue structures, and accounting for structures which may be occluded/obstructed or otherwise concealed.]

a first screen coupled to the video processing device and configured to display the 3D volume having the modified video stream. [See Shelton, ¶ 0271 discloses that, based on the determined relative distances 3123, 3124, 3128, and the transmitted image data (e.g., of the first scene, the second scene, or both), the controller can create a merged image that is projected onto the first display 3132, the second display 3134, or both.]

It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Klontz to add the teachings of Shelton in order to overlay captured images of a surgery site to enhance a positional awareness/understanding by an operator.

Regarding claim 2, Klontz in view of Shelton discloses all the limitations of claim 1. Klontz discloses:

further comprising: a second screen coupled to the video processing device. [See Klontz, ¶ 0016-0017 discloses that the display device 107 can be one or more devices that can display the enhanced display 145 for an operator of the imaging units 111A, 111B.]

Regarding claim 3, Klontz in view of Shelton discloses all the limitations of claim 2.
Klontz discloses:

wherein the video processing device is configured to output at least one of the first video stream, the second video stream, or 3D volume having the modified video stream on at least one of the first screen or the second screen. [See Klontz, ¶ 0015-0019 discloses the combined image stream 133 can provide an enhanced display 145 of the subject 117, who can be a surgical patient. For example, the imaging controller 105 can generate the combined image stream 133 to "selectably" enhance the field-of-view of one of the imaging units 111A, 111B based on the field-of-view of the other imaging unit 111A, 111B. In some implementations, the enhanced display 145 may be a stereoscopic 3D view from the perspective of one of the imaging units 111A, 111B.]

Regarding claim 4, Klontz in view of Shelton discloses all the limitations of claim 1. Klontz discloses:

wherein in modifying the first video stream, the video processing device is further configured to remove a portion of the first video stream. [See Klontz, ¶ 0014-0019, 0030 discloses that the image processing module 359 can include program instructions that, using the image signal 127 from an imaging sensor (e.g., image sensor 231), register and stitch the images to generate the image stream 127. The image processor 315 can be a device configured to receive an image signal 365 from an image sensor (e.g., image sensor 231) and condition images included in the image signal 365. In accordance with aspects of the present disclosure, conditioning the image signal 365 can include normalizing the size, exposure, and brightness of the images. Also, conditioning the image signal 365 can include removing visual artifacts.]

Regarding claim 5, Klontz in view of Shelton discloses all the limitations of claim 4. Klontz discloses:

wherein in modifying the first video stream, the video processing device is further configured to fill in the portion of the first video stream with a reconstructed portion of the second video stream. [See Klontz, ¶ 0015-0019, 0030, 0037 discloses registering and stitching together images in the fields of view of imaging units 111A and 111B based on spatial information. It is noted that the imaging controller can identify and remove images of physical structures in the overlapping fields of view of the imaging units 111A and 111B (and corresponding image streams 127A and 127B, respectively). It is understood by one of ordinary skill that during such a stitching procedure, overlapping structures or portions of (for instance) a "first" video stream would be supplemented by corresponding (reconstructed) portions of a second video stream.]

Regarding claim 17, this claim recites analogous limitations to claim 1 in the form of "a method" rather than "a system", and is therefore rejected on the same premise.

Regarding claim 18, this claim recites analogous limitations to claim 4, and is therefore rejected on the same premise.

Regarding claim 19, this claim recites analogous limitations to claim 5, and is therefore rejected on the same premise.

Claim(s) 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Klontz in view of Shelton in view of Govari et al. (US 20200107886 A1) (hereinafter Govari).

Regarding claim 6, Klontz in view of Shelton discloses all the limitations of claim 5. Klontz in view of Shelton does not appear to explicitly disclose:

wherein the video processing device is further configured to generate a depth map of the first and second tissue surfaces and generate the reconstructed portion based on the depth map.

However, Govari discloses:

wherein the video processing device is further configured to generate a depth map of the first and second tissue surfaces and generate the reconstructed portion based on the depth map. [See Govari, ¶ 0019, 0023, 0025, 0071, 0116, 0118-0120 discloses performing an image depth analysis to determine a set of 3D characteristics for each of the one or more 2D images, wherein the set of 3D characteristics includes a depth of pixels. Further, when performing the image depth analysis, (i) identify a point in a first image of the set of image data, wherein the point includes a portion of the surgical site that is present within both the first image captured by a first camera of the two or more cameras and within a second image captured by a second camera of the two or more cameras, (ii) identify the point in the second image, (iii) determine a displacement of the point from the first image to the second image, and (iv) determine the depth of pixels for the point based on the displacement. The distance between each unfocused image may be used to calculate the depth or distance to the imaged target. Images captured by the wavefront imaging device (304) may be provided as input to a wavefront sampling algorithm, such as Frigerio's multi image procedure, in order to produce a 3D depth map and 3D model of the imaged object.]

It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Klontz in view of Shelton to add the teachings of Govari in order to produce a 3D depth map/model of an imaged tissue sample.

Regarding claim 20, this claim recites analogous limitations to claim 6, and is therefore rejected on the same premise. Please see examiner's earlier rejection of claim 6 for the corresponding motivation statement.

Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over Klontz in view of Shelton in view of Roberts et al. (US 20170085855 A1) (hereinafter Roberts).

Regarding claim 7, Klontz in view of Shelton discloses all the limitations of claim 2.
Klontz in view of Shelton does not appear to explicitly disclose:

wherein the video processing device is further configured to generate a virtual marker in the 3D volume having the modified video stream.

However, Roberts discloses:

wherein the video processing device is further configured to generate a virtual marker in the 3D volume having the modified video stream. [See Roberts, ¶ 0034, 0037 discloses that system 100 includes an interactive controller 170 adapted to select one of a plurality of predefined stored views 160 of 3D model 134 and annotating data 108. Stored views 160 define a plurality of views that combine annotating data 108 with stereo image stream 107 to generate enhanced stereo image stream 103. For example, one view may select a particular annotating data 108 for enhancing stereo image stream 103. Each view within stored views 160 also defines which portions of annotating data 108 to select for combining with stereo image stream 107. For example, where a surgical incision is planned (see planned surgical procedures 760 of FIG. 7), stored view 160 may define a view that graphically indicates planned incisions within enhanced stereo image stream 103, and may further define a corresponding portion of annotating data 3D model 222 for display with the planned incision to indicate tissue structures beneath the intended point of incision. This provides the surgeon with additional information relative to the next immediate step in the planned operational procedure.]

It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Klontz in view of Shelton to add the teachings of Roberts in order to enable continually updating a 3D model based on a stereo image stream to reactively shift, rotate, warp, or scale annotating data to match a 3D model within a surgical imaging environment.

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Klontz in view of Shelton in view of Johnson et al. (US 20190183591 A1) (hereinafter Johnson).

Regarding claim 8, Klontz in view of Shelton discloses all the limitations of claim 2. Klontz in view of Shelton does not appear to explicitly disclose:

wherein at least one of the first screen or the second screen includes a graphical user interface configured to select at least one of the first screen or the second screen to display at least one of the first video stream, the second video stream, or the 3D volume having the modified video stream.

However, Johnson discloses:

wherein at least one of the first screen or the second screen includes a graphical user interface configured to select at least one of the first screen or the second screen to display at least one of the first video stream, the second video stream, or the 3D volume having the modified video stream. [See Johnson, ¶ 0005, 0008, 0103 discloses a graphical user interface includes a plurality of reconfigurable display panels, receiving a user input at one or more user input devices, wherein the user input indicates a selection of at least one software application relating to the robotic surgical system, and rendering content from the at least one selected software application among the plurality of reconfigurable display panels. An endoscopic image of a surgical site may additionally or alternatively be displayed on the display. In some variations, the method may further include reconfiguring a layout of at least a portion of the display panels. For example, at least one display panel may be repositioned and/or resized. As another example, content from a second selected software application may be rendered in the at least one display panel. Furthermore, in some variations, the method may include mirroring at least some of the rendered content onto a second display.
Further, a user (such as a surgeon at a user console) may mark up an endoscopic image with annotations using a drawing tool (with a cursor 1012). For example, an annotation 1010 is a circle drawn around a portion of tissue to identify a region of tissue. As another example, an annotation 1012 is an arrow indicating a direction that the tissue identified in 1010 may be retracted. The annotated image (telestration) may be sent to a second display that is displaying another instance of the GUI (e.g., at a control tower where the second display may be viewable by other members of the surgical team) to help communicate a surgical plan. As shown in FIG. 10B, the second display may display the telestration with the mirrored annotations.]

It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Klontz in view of Shelton to add the teachings of Johnson in order to enable a user to selectively control image streams or annotated images to be sent to first or second displays.

Claim(s) 9-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Klontz in view of Shelton in view of Schelter et al. (US 20210085164 A1) (hereinafter Schelter).

Regarding claim 9, this claim recites analogous limitations to claim 1, and is therefore rejected on the same premise. Further, claim 9 recites the following limitations which are not explicitly found in claim 1, but are addressed as follows:

Klontz discloses:

A surgical robotic system comprising: [See Klontz, Fig. 1 illustrates a surgical imaging system (100).]

a video processing device configured to: [See Klontz, ¶ 0013-0016, Fig. 1 discloses an imaging controller (105).]

receive the first video stream from the first camera and the second video stream from the second camera; [See Klontz, Fig. 1 illustrates an imaging controller (105) receiving a first image stream 127A and a second image stream 127B from first and second cameras, respectively.]

a surgeon console including a first screen coupled to the video processing device and configured to display the combined video stream. [See Klontz, ¶ 0017, Fig. 1 illustrates a screen 107 coupled to the imaging controller 105 and suitably able to display the video stream processed by the imaging controller.]

Shelton discloses:

combine the first video stream based on the second video stream to generate a combined video stream based on the first position and the second position; and [See Shelton, ¶ 0271-0272 discloses generating a 3D model of an instrument which can be overlaid into an image of the system which cannot see an alternate view. Since the representative depiction is a generated image, various properties of the image (e.g., the transparency, color) can also be manipulated to allow the system to be clearly shown as not within the real-time visualization video feed, but as a construct from the other view. If the user were to switch between imaging systems, the opposite view could also have the constructed instruments within its field of view; See Shelton, ¶ 0271 discloses that, based on the determined relative distances 3123, 3124, 3128, and the transmitted image data (e.g., of the first scene, the second scene, or both), the controller can create a merged image that is projected onto the first display 3132, the second display 3134, or both.]

generate a 3D volume including a trajectory of at least one of the first camera, the second camera, or the instrument based on the combined video stream; and [See Shelton, ¶ 0271-0272 discloses generating a 3D model of an instrument which can be overlaid into an image of the system which cannot see an alternate view.]

It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Klontz to add the teachings of Shelton in order to overlay captured images of a surgery site to enhance a positional awareness/understanding by an operator.

Klontz in view of Shelton does not appear to explicitly disclose:

a first robotic arm including a first camera located at a first position and configured to capture a first video stream of a first tissue surface;

a second robotic arm including a second camera located at a second position and configured to capture a second video stream of a second tissue surface,

a third robotic arm including a surgical instrument;

However, Schelter discloses:

a first robotic arm including a first camera located at a first position and configured to capture a first video stream of a first tissue surface; [See Schelter, ¶ 0025 discloses a stabilization component may be an adjustable stabilization device, multiple adjustable stabilization devices, an adjustable secure arm (25), more than one adjustable secure arm, or the like, which may allow scopes to be positioned perhaps in a stable but adjustable manner as may be understood from FIG. 3. A stabilizing component may be a connected securement support (26) which can stabilize a first camera, a second camera, and even a third camera or more, which may be understood from FIG. 2.]

a second robotic arm including a second camera located at a second position and configured to capture a second video stream of a second tissue surface, [See Schelter, ¶ 0025 discloses a stabilization component may be an adjustable stabilization device, multiple adjustable stabilization devices, an adjustable secure arm (25), more than one adjustable secure arm, or the like, which may allow scopes to be positioned perhaps in a stable but adjustable manner as may be understood from FIG. 3. A stabilizing component may be a connected securement support (26) which can stabilize a first camera, a second camera, and even a third camera or more, which may be understood from FIG. 2.]

a third robotic arm including a surgical instrument; [See Schelter, ¶ 0016 discloses a third endoscope with a third camera (19) may be placed or even inserted at a third position (20) near a surgical target area of a patient. An endoscope may be an instrument that may be used as a viewing system for examining an inner part of the body, which may be a slender, tubular optical instrument or the like. Another instrument may be used for biopsy or surgery or the like and may be attached to or separate from an endoscope. An endoscope may include a camera or other device which can capture an image or images. A surgical target area of a patient may be an area of an inside of a body where surgery or other procedures may need to take place. This could be for an endoscopic procedure, a laparoscopic procedure, a biopsy, or the like, and may be an endoscopic area, a laparoscopic area, joint, other areas of the body, or the like.]

It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Klontz in view of Shelton to add the teachings of Schelter in order to provide enhanced visualization of laparoscopic space with multiple camera views on a screen or multiple screens.
Regarding claim 10, this claim recites analogous limitations to claim 2, and is therefore rejected on the same premise.

Regarding claim 11, this claim recites analogous limitations to claim 3, and is therefore rejected on the same premise.

Regarding claim 12, Klontz in view of Shelton in view of Schelter discloses all the limitations of claim 9. Klontz discloses:

wherein in combining the first video stream, the video processing device is further configured to stitch the first video stream with the second video stream. [See Klontz, ¶ 0014-0019, 0030 discloses that the image processing module 359 can include program instructions that, using the image signal 127 from an imaging sensor (e.g., image sensor 231), register and stitch the images to generate the image stream 127. The image processor 315 can be a device configured to receive an image signal 365 from an image sensor (e.g., image sensor 231) and condition images included in the image signal 365. In accordance with aspects of the present disclosure, conditioning the image signal 365 can include normalizing the size, exposure, and brightness of the images. Also, conditioning the image signal 365 can include removing visual artifacts.]

Regarding claim 13, this claim recites analogous limitations to claim 5, and is therefore rejected on the same premise.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Klontz in view of Shelton in view of Schelter in view of Govari.

Regarding claim 14, this claim recites analogous limitations to claim 6, and is therefore rejected on the same premise. Please see examiner's earlier rejection of claim 6 for the corresponding motivation statement.

Claim(s) 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Klontz in view of Shelton in view of Schelter in view of Roberts.

Regarding claim 15, this claim recites analogous limitations to claim 7, and is therefore rejected on the same premise. Please see examiner's earlier rejection of claim 7 for the corresponding motivation statement.

Regarding claim 16, Klontz in view of Shelton in view of Schelter discloses all the limitations of claim 15. Roberts discloses:

wherein the virtual marker is placed at a location within a field of view of the second camera. [See Roberts, ¶ 0034-0037 discloses that by displaying structure that is not readily or directly visible, the surgeon is made aware of such structures and their relative positions without the need to look away from the surgical field to reference other sources that may distract the surgeon from the procedure, or which may not represent current positions of the structures. That is, system 100 makes information of enhanced data 108 available within the same field of view as surgical field 184, thereby lessening the need for the surgeon to look away from stereo viewer 102 or from surgical field 184.]

The reasons to combine the cited prior art are applicable to those presented for previously rejected claim 7.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK E DEMOSKY, whose telephone number is (571) 272-8799. The examiner can normally be reached Monday - Friday, 7-4 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie Atala, can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PATRICK E DEMOSKY/
Primary Examiner, Art Unit 2486

Prosecution Timeline

Sep 06, 2023: Application Filed
Mar 20, 2025: Non-Final Rejection — §103
Jun 25, 2025: Response Filed
Jul 15, 2025: Final Rejection — §103
Oct 16, 2025: Response after Non-Final Action
Nov 17, 2025: Request for Continued Examination
Nov 22, 2025: Response after Non-Final Action
Nov 24, 2025: Non-Final Rejection — §103
Feb 23, 2026: Response Filed
Mar 12, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586178: GRADING COSMETIC APPEARANCE OF A TEST OBJECT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579873: SECURITY CAMERA SYSTEM WITH MULTI-DIRECTIONAL MOUNT AND METHOD OF OPERATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574515: QUANTIZATION MATRIX ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM STORING BITSTREAM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563235: CONFIGURABLE NAL AND SLICE CODE POINT MECHANISM FOR STREAM MERGING (granted Feb 24, 2026; 2y 5m to grant)
Patent 12556685: IMAGE ENCODING/DECODING METHOD AND APPARATUS, AND RECORDING MEDIUM STORING BITSTREAM (granted Feb 17, 2026; 2y 5m to grant)
Based on the 5 most recent grants by this examiner.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 65% (55% with interview, a -9.7% lift)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 377 resolved cases by this examiner. Grant probability derived from career allow rate.
