Prosecution Insights
Last updated: April 19, 2026
Application No. 17/823,058

DUAL CAMERA TRACKING SYSTEM

Non-Final OA §103
Filed
Aug 29, 2022
Examiner
DEMOSKY, PATRICK E
Art Unit
2486
Tech Center
2400 — Computer Networks
Assignee
Sony Interactive Entertainment Inc.
OA Round: 6 (Non-Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 6-7
Time to Grant: 3y 1m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 65% (244 granted / 377 resolved; +6.7% vs TC avg)
Interview Lift: -9.7% (minimal; resolved cases with an interview close at a lower rate than those without)
Avg Prosecution: 3y 1m (typical timeline)
Career History: 399 total applications across all art units; 22 currently pending

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 61.5% (+21.5% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 377 resolved cases
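The four deltas all point back to a single Tech Center baseline, which is easy to verify. A minimal sketch (plain arithmetic on the figures above, nothing tool-specific) recovers it:

```python
# Examiner's statute-specific rates and their reported deltas vs the
# Tech Center average (figures taken from the chart above).
examiner_rate = {"101": 2.4, "103": 61.5, "102": 17.7, "112": 14.0}
delta_vs_tc = {"101": -37.6, "103": +21.5, "102": -22.3, "112": -26.0}

# The TC average implied by each pair: examiner rate minus reported delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

print(implied_tc_avg)  # every statute works out to 40.0
```

In other words, the black "Tech Center average estimate" line sits at 40% for every statute: this examiner leans far more heavily on §103 than peers and almost never issues §101 rejections.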

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments filed 2/24/2026 have been fully considered, but they are drawn towards newly amended claim language. Regarding Rejections under 35 U.S.C. § 103, Applicant contends that the cited prior art fails to disclose newly amended limitations of independent claim 1, including: “analyze image data of the first auxiliary image to identify a subject of interest independently of a line of sight of a user”. See the rejection below for how the cited art in light of new/existing references reads on the newly amended language, as well as the examiner’s interpretation of the cited art in view of the presented claim set.

Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office Action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission received 2/24/2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-2, 4-5, 14, 16-18, and 21-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Feng et al. (WO 2021184341 A1) (hereinafter Feng) in view of Chang et al. (US 20190130869 A1) (hereinafter Chang) in view of Andrai et al. (WO 2022164094 A1) (hereinafter Andrai) in view of Nir et al. (US 20160345802 A1) (hereinafter Nir). Regarding claim 1, Feng discloses: An assembly, comprising: [See Feng, ¶ 032, FIG. 1A illustrates an exemplary system 100, also referred to herein as autofocusing system 100 or camera system 100, for autofocus using a deep depth-of-field sensor guide that may be used in accordance with certain disclosed embodiments.] at least one main camera configured to receive light through at least one lens and generate images based thereon; [See Feng, ¶ 040 discloses as shown in FIG. 2A, camera system 200 includes main camera 202 and auxiliary camera 204. 
In some embodiments, the shallow DOF (depth of field) of main camera 202 may be provided by a lens assembly with a large aperture, a long focal length, and/or a large sensor size.] at least one auxiliary camera configured to generate a first auxiliary image; and [See Feng, ¶ 040 discloses auxiliary camera 204 may include a deep DOF sensor with a focus range that covers a large distance range front-to-back (e.g., from several meters in front of the focal plane to nearly infinity behind), capturing objects within a large range of landscape view with acceptable visual clarity. In some embodiments, the deep DOF of auxiliary camera 204 may be provided by a lens assembly with a small aperture, a short focal length, and/or a small sensor size; See Feng, ¶ 032 discloses autofocusing system 100 includes one or more processors 102 connected to a main camera 104, one or more auxiliary cameras 106 and inputs and outputs 108.] at least one processor configured with instructions to: [See Feng, Fig. 1A illustrates element (102) as “one or more processors”.] establish a focal length of the lens to the subject of interest; and. [See Feng, ¶ 079 discloses in step 1010, a first region of interest (ROI) in a first view of a scene captured by a first camera ( e.g., auxiliary camera 106, 204, 208, 210, or 404) is determined ( e.g., by system 100 or system 120, such as an ROI determination module 150 of system 120). In some embodiments, the first ROI is determined based on first image data associated with the first view that is captured by and obtained from the first camera (e.g., by image obtaining and processing module 148 of system 120). The first camera may be configured to continuously capture the first view of the scene. 
The first camera may be associated with a first DOF; See Feng, ¶ 080 discloses in step 1020, a second camera (e.g., main camera 104, 202, 206, or 402) is caused to focus on a second ROI in a second view corresponding to the determined first ROI; See Feng, ¶ 032 discloses one or more processors 102 may be configured to produce outputs based on information received from main camera 104 and one or more auxiliary cameras 106. One or more processors 102 may be configured to receive information from one or more auxiliary cameras 106 and provide instructions to main camera 104; See Feng, ¶ 046, 067-068 discloses that a first person 310 is located within focus depth 306 of a main camera and a second person 312 is within focus depth 308 of auxiliary camera but beyond focus depth 306 of the main camera. In this example, the auxiliary camera could capture the activity of second person 312, and the related image data can be used to calculate information associated with the second person’s position, such as positional data in the real space, or positional data relative to the view captured by the second camera. In the present embodiment, images or videos with cinema-like focus on different subjects in casual videography can be captured by a shallow DOF camera with accurate and distinct focuses on respective subjects (e.g., people and/or objects). Deep DOF video images can be shot with a main camera with a large FOV and a shallow DOF. Auxiliary camera(s) with deep DOF can be used to guide/assist the main camera to focus on other ROIs identified by auxiliary camera(s).]

Feng does not appear to explicitly disclose: a head-mounted apparatus with a view finder on which the at least one main and at least one auxiliary camera are mounted; However, Chang discloses: a head-mounted apparatus with a view finder on which the at least one main and at least one auxiliary camera are mounted; [See Chang, Fig. 
3b, ¶ 0021, 0033-0034 discloses the image acquisition system 20 is responsible for capturing images of the surrounding environment. The image acquisition system 20 may include one or more cameras. Further, that the image acquisition system 20 includes two cameras which are disposed in the left-front portion and right-front portion of the head-mounted display device 100 and are used to capture the images of the left eye sight and the right eye sight of the user/wearer. The left-front portion of the head-mounted display device 100 is disposed with a viewfinder which may be provided with a hollow space or see-through optics and may enable the left eye sight of the user/wearer to see through the head-mounted display device 100.] It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Feng to add the teachings of Chang in order to provide a head-mounted display with a view-finder and attached camera systems for imaging an environment surrounding a user. Feng in view of Chang does not appear to explicitly disclose: simultaneously display, at the head-mounted apparatus, the generated images and the first auxiliary image that shows the subject of interest. However, Andrai discloses: simultaneously display, at the head-mounted apparatus, the generated images and the first auxiliary image that shows the subject of interest. [See Andrai, ¶ 0198-0199, 0202-0203 discloses that the HMD (100) can display a second image (720) obtained through the process of FIG. 2 by overlaying it on a real scene image (710) captured in front of the HMD (100) or in front of the shooting unit (140); See Andrai, ¶ 0084 discloses that the camera unit (140) may include various types of cameras built into the HMD (100), such as at least one of a monocular camera, a binocular camera, or an infrared camera; See Andrai, Fig. 
7 illustrates simultaneously displaying two image streams from various cameras mounted to the HMD.] It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Feng in view of Chang to add the teachings of Andrai in order to enhance user awareness by presenting simultaneous display of different captured fields of view. Feng in view of Chang in view of Andrai does not appear to explicitly disclose: based at least in part on the first auxiliary image, identify a subject of interest, wherein the subject of interest is indicated by a visual indicator shown in the first auxiliary image, and wherein the visual indicator provides location information about the subject of interest; However, Nir discloses: analyze image data of the first auxiliary image to identify a subject of interest independently of a line of sight of a user, [See Nir, ¶ 0075-0078, 0174, 0195-0203, 0210 discloses a computer program executed by a data processing apparatus to identify from at least one image of a field of view, the occurrence of at least one item of interest; See Nir, Figs. 7a-7b, 8a-8b, 9a-9d] wherein the subject of interest is indicated by a visual indicator shown in the first auxiliary image, and wherein the visual indicator provides location information about the subject of interest; [See Nir, Figs. 7a-7b, 8a-8b, 9a-9d, ¶ 0195-0203, 0210 discloses a visual warning/popup indicating the existence of an item of interest outside the field of view of a display. In this embodiment, the location of the popup indicates that the item of interest is in the lower left quadrant of the field of view of the fisheye lens (710). 
A popup can be in a fixed position, it can use an arrow or other directional symbol to indicate the direction of the item of interest with respect to the icon or with respect to a fixed position (such as the center of the field of view), it can use different weight or different color symbols to indicate a distance to the item of interest, or a text message indicating direction, distance or both. The text message can be on the warning, or it can form part of a separate warning, which can be any type of visual or aural message as described hereinabove for a warning. Any combination of the above warnings and/or direction indicators and/or distance indicators can be used.] It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Feng in view of Chang in view of Andrai to add the teachings of Nir in order to display an indication of an item of interest being outside the field of view of a display. Regarding claim 2, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng discloses: wherein the at least one auxiliary camera comprises a single lens and imager. [See Feng, ¶ 040 discloses that the deep depth of field of auxiliary camera 204 may be provided by a lens assembly with a small aperture, a short focal length, and/or a small sensor size.] Regarding claim 4, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng discloses: wherein the instructions are executable to: control the at least one main camera at least in part by moving the at least one lens of the at least one main camera to capture a subject imaged by the at least one auxiliary camera. [See Feng, ¶ 077 discloses in step 930, the second camera is caused to focus on the second ROI in the second view (e.g., by focus adjustment module 152). 
In some embodiments, the focusing process may be conducted automatically. A distance between a lens assembly and an image sensor of the second camera can be adjusted to cause the second camera to focus on the second ROI (e.g., based on the determined location information of the second ROI in step 920).] Regarding claim 5, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng discloses: wherein the instructions are executable to: control the at least one main camera at least in part by presenting on at least one display an indication of where to aim the at least one main camera. [See Feng, ¶ 075 discloses for example, the identified plurality of ROIs may be presented on a graphical user interface (e.g., region 712 on the display of user device 708). A user input, such as a finger contact with a touch screen (e.g., indicated by hand 718 in FIG. 7B), an audio command, or an eye-gaze, may be detected to indicate a selection of the first ROI from the plurality of ROls (e.g., selection of icon corresponding to tree 704 from region 712) can be received (e.g., detected by user interface 124 on the display).] Regarding claim 14, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng discloses: wherein the at least one auxiliary camera comprises a single lens and imager. [See Feng, ¶ 040 discloses that the deep depth of field of auxiliary camera 204 may be provided by a lens assembly with a small aperture, a short focal length, and/or a small sensor size.] Regarding claim 16, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. 
Feng discloses: comprising: at least one processor configured with instructions to: based at least in part on the first auxiliary image, [See Feng, ¶ 032 discloses one or more processors 102 may be configured to produce outputs based on information received from main camera 104 and one or more auxiliary cameras 106. One or more processors 102 may be configured to receive information from one or more auxiliary cameras 106 and provide instructions to main camera 104.] establish the focal length of the at least one lens. [See Feng, ¶ 079 discloses in step 1010, a first region of interest (ROI) in a first view of a scene captured by a first camera (e.g., auxiliary camera 106, 204, 208, 210, or 404) is determined (e.g., by system 100 or system 120, such as an ROI determination module 150 of system 120). In some embodiments, the first ROI is determined based on first image data associated with the first view that is captured by and obtained from the first camera (e.g., by image obtaining and processing module 148 of system 120). The first camera may be configured to continuously capture the first view of the scene. The first camera may be associated with a first DOF; See Feng, ¶ 080 discloses in step 1020, a second camera (e.g., main camera 104, 202, 206, or 402) is caused to focus on a second ROI in a second view corresponding to the determined first ROI.]

Regarding claim 17, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng discloses: wherein the at least one lens has a field of view (FOV) of less than three degrees and the at least one auxiliary camera produces images using a FOV of greater than three degrees. [See Feng, ¶ 072 discloses that the first camera has a first DOF, and the second camera has a second DOF smaller than the first DOF (e.g., DOF 306 of the main camera is smaller than DOF 308 of the auxiliary camera); See Feng, ¶ 067 discloses one or more modules of system 120 as discussed with reference to FIG. 
1B) may instruct or otherwise control the auxiliary camera to adjust its FOV 714 (e.g., adjusting focal length to focus on or shifting its ROI to the selected object, e.g., a tree 704); See Feng, ¶ 043 discloses that in some embodiments, main camera 206 may support the replacement of lenses with different focal lengths, for example a wide angle lens (e.g., a short focal length and a wide FOV) and/or a telephoto lens (e.g., a long-focus lens) , auxiliary cameras 208 and 210 are used to increase the resolution of the view being captured without increasing the resolution of either of the individual cameras 208 and 210. Therefore, there may be several auxiliary cameras 208 and 210 having different focal lengths. Feng’s disclosure thus clearly discloses the use of a telephoto lens used in concert with an auxiliary camera, wherein it is repeatedly noted that main and auxiliary cameras may support lenses of differing focal lengths. One of ordinary skill would obviously understand a telephoto lens to have a narrower field of view than a wide-angle lens. As evidenced by paragraph 0006 of CN215227418U (Lu et al.), the field of view of a camera being less than three degrees (FOV<5 including fields of view less than 3) is selectable as would be routinely and conventionally understood by one of ordinary skill.] Regarding claim 18, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng discloses: wherein the instructions are executable to: control the at least one main camera at least in part by presenting on at least one display an indication of where to aim the at least one main camera. [See Feng, ¶ 075 discloses for example, the identified plurality of ROIs may be presented on a graphical user interface (e.g., region 712 on the display of user device 708). A user input, such as a finger contact with a touch screen (e.g., indicated by hand 718 in FIG. 
7B), an audio command, or an eye-gaze, may be detected to indicate a selection of the first ROI from the plurality of ROls (e.g., selection of icon corresponding to tree 704 from region 712) can be received (e.g., detected by user interface 124 on the display).] Regarding claim 21, this claim recites analogous limitations to claim 1 in the form of “a method” rather than “an apparatus” and is therefore rejected on the same premise. Please see examiner’s earlier rejection of claim 1 for corresponding motivation statement. Further, claim 21 recites the following limitations which are not explicitly found from claim 1, but are addressed as follows: Nir discloses: wherein the visual indicator is an arrow or a marker providing location information about the subject of interest; [See Nir, Figs. 7a-7b, 8a-8b, 9a-9d, ¶ 0195-0203, 0210 discloses a visual warning/popup indicating the existence of an item of interest outside the field of view of a display. In this embodiment, the location of the popup indicates that the item of interest is in the lower left quadrant of the field of view of the fisheye lens (710). A popup can be in a fixed position, it can use an arrow or other directional symbol to indicate the direction of the item of interest with respect to the icon or with respect to a fixed position (such as the center of the field of view), it can use different weight or different color symbols to indicate a distance to the item of interest, or a text message indicating direction, distance or both. The text message can be on the warning, or it can form part of a separate warning, which can be any type of visual or aural message as described hereinabove for a warning. Any combination of the above warnings and/or direction indicators and/or distance indicators can be used.] The reasons to combine the cited prior art are applicable to those presented for previously rejected claim 1. 
Regarding claim 22, this claim recites analogous limitations to claim 1 in the form of “a non-transitory computer-readable medium” rather than “an apparatus” and is therefore rejected on the same premise. Please see examiner’s earlier rejection of claim 1 for corresponding motivation statement. Further, claim 22 recites the following limitations which are not explicitly found in claim 1, but are addressed as follows: Feng discloses: A non-transitory computer-readable medium storing instructions that, upon execution, cause operations comprising: [See Feng, ¶ 005 discloses a non-transitory computer-readable medium with instructions that are executed by a processor.]

Claim(s) 3, 9, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Feng in view of Chang in view of Andrai in view of Nir in view of Holzer et al. (WO 2019213392 A1) (hereinafter Holzer).

Regarding claim 3, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng in view of Chang in view of Andrai in view of Nir does not appear to explicitly disclose: wherein the at least one auxiliary camera comprises a stereoscopic camera. However, Holzer discloses: wherein the at least one auxiliary camera comprises a stereoscopic camera. [See Holzer, ¶ 0082-0083 discloses generating multi-view interactive digital media, and that depth images may be used in generating said media. Depth images can include depth, 3D, or disparity image data streams, and the like, and can be captured by devices such as, but not limited to, stereo cameras.] It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Feng in view of Chang in view of Andrai in view of Nir to add the teachings of Holzer in order to generate multi-view interactive media through depth image data. 
Regarding claim 9, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Holzer discloses: comprising a computer simulation head-mounted display supporting the at least one main camera and the at least one auxiliary camera. [See Holzer, ¶ 0082, 0118, 0255 discloses a stereoscopic head-mounted display which provides separate images for each eye. Such separate images may be stereoscopic pairs of image frames, such as stereoscopic pair 2300, generated by method 2200. Each image in the stereoscopic pair may be projected to the user at one of screens 2501 or 2502. As depicted in FIG. 25B screen 2501 projects an image to the user’s left eye, while screen 2502 projects in image to the user’s right eye.] The reasons to combine the cited prior art are applicable to those presented for previously rejected claim 3. Regarding claim 15, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Holzer discloses: wherein the at least one auxiliary camera comprises a stereoscopic camera. [See Holzer, ¶ 0082-0083 discloses generating multi-view interactive digital media, and that depth images may be used in generating said media. Depth images can include depth, 3D, or disparity image data streams, and the like, and can be captured by devices such as, but not limited to, stereo cameras.] The reasons to combine the cited prior art are applicable to those presented for previously rejected claim 3. Regarding claim 20, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Holzer discloses: comprising a computer simulation head-mounted display supporting the at least one main camera and the at least one auxiliary camera. [See Holzer, ¶ 0082, 0118, 0255 discloses a stereoscopic head-mounted display which provides separate images for each eye. Such separate images may be stereoscopic pairs of image frames, such as stereoscopic pair 2300, generated by method 2200. 
Each image in the stereoscopic pair may be projected to the user at one of screens 2501 or 2502. As depicted in FIG. 25B screen 2501 projects an image to the user’s left eye, while screen 2502 projects in image to the user’s right eye.] The reasons to combine the cited prior art are applicable to those presented for previously rejected claim 3. Claim(s) 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Feng in view of Chang in view of Andrai in view of Nir in view of Wucher et al. (US 20230053026 A1) (hereinafter Wucher). Regarding claim 10, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng discloses: further comprising instructions for: establishing at least a focal length of the at least one lens based at least in part on the imaging; and [See Feng, ¶ 079 discloses in step 1010, a first region of interest (ROI) in a first view of a scene captured by a first camera ( e.g., auxiliary camera 106, 204, 208, 210, or 404) is determined ( e.g., by system 100 or system 120, such as an ROI determination module 150 of system 120). In some embodiments, the first ROI is determined based on first image data associated with the first view that is captured by and obtained from the first camera (e.g., by image obtaining and processing module 148 of system 120). The first camera may be configured to continuously capture the first view of the scene. The first camera may be associated with a first DOF; See Feng, ¶ 080 discloses in step 1020, a second camera (e.g., main camera 104,202,206, or 402) is caused to focus on a second ROI in a second view corresponding to the determined first ROI.] Andrai discloses: imaging the subject of interest using the at least one auxiliary camera; and [See Andrai, ¶ 0084 discloses that the camera unit (140) may include various types of cameras built into the HMD, such as at least one of a monocular camera, a binocular camera, or an infrared camera.] 
The reasons to combine the cited prior art are applicable to those presented for previously rejected claim 1. Feng in view of Chang in view of Andrai in view of Nir does not appear to explicitly disclose: presenting on at least one display an arrow indication of where to aim the at least one main camera. However, Wucher discloses: presenting on at least one display an arrow indication of where to aim the at least one main camera. [See Wucher, ¶ 0076 discloses that feedback presented on the display of the user device by the image capture application 125 may comprise symbols or other indicia. For example, the image capture application 125 may display arrows indicating that the user should reposition the user device 121 (e.g., aim the rear-facing camera 1602 a certain direction, move the rear-facing camera 1602 closer or further away from the user 120, etc.) or to move the user’s body or head a certain way (e.g., open their mouth wider, open their mouth or tilt their head to expose certain teeth, etc.). Different types of indicia may be displayed based on the instruction. For example, a first style of arrow may indicate a direction for the user to move their head, a second style of arrow may indicate a direction to move the user device 121 including the rear-facing camera 1602, an icon or illustration of an open-mouth may indicate that the user should open their mouth wider, etc.] It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Feng in view of Chang in view of Andrai in view of Nir to add the teachings of Wucher in order to provide feedback on a display of a user device to indicate where a user should reposition or aim a camera of the device. (Wucher, ¶ 0076)

Regarding claim 11, Feng in view of Chang in view of Andrai in view of Nir in view of Wucher discloses all the limitations of claim 10. 
Feng discloses: comprising: controlling the at least one main camera at least in part by moving the at least one lens to capture the subject of interest. [See Feng, ¶ 077 discloses in step 930, the second camera is caused to focus on the second ROI in the second view (e.g., by focus adjustment module 152). In some embodiments, the focusing process may be conducted automatically. A distance between a lens assembly and an image sensor of the second camera can be adjusted to cause the second camera to focus on the second ROI (e.g., based on the determined location information of the second ROI in step 920).] Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Feng in view of Chang in view of Andrai in view of Nir in view of Mccombe et al. (US 20220337744 A1) (hereinafter Mccombe) Regarding claim 19, Feng in view of Chang in view of Andrai in view of Nir discloses all the limitations of claim 1. Feng in view of Chang in view of Andrai in view of Nir does not appear to explicitly disclose: wherein the at least one auxiliary camera is mounted to a front portion of the hood of the at least one lens. However, Mccombe discloses: wherein the at least one auxiliary camera is mounted to a front portion of the hood of the at least one lens. [See Mccombe, annotated Fig. 4 illustrated below clearly shows auxiliary camera clusters 450 and 460 arranged/mounted to a front portion of the lens hood edge.] 
[Annotated Mccombe Fig. 4 image omitted; per the examiner, it shows auxiliary camera clusters 450 and 460 arranged/mounted to a front portion of the lens hood edge.] It would have been obvious to the person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Feng in view of Chang in view of Andrai in view of Nir to add the teachings of Mccombe in order to enable various additional functionalities, such as computational photography, beyond those possible using a single image sensor, to provide camera systems, configurations and devices that utilize auxiliary image sensors in addition to a main image sensor. (Mccombe, ¶ 0016)

Allowable Subject Matter

Claim 13 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK E DEMOSKY whose telephone number is (571) 272-8799. The examiner can normally be reached Monday - Friday 7-4 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PATRICK E DEMOSKY/ Primary Examiner, Art Unit 2486

Prosecution Timeline

Aug 29, 2022
Application Filed
Dec 22, 2023
Non-Final Rejection — §103
Mar 08, 2024
Response Filed
Jun 15, 2024
Non-Final Rejection — §103
Sep 17, 2024
Response Filed
Jan 08, 2025
Final Rejection — §103
Apr 14, 2025
Request for Continued Examination
Apr 18, 2025
Response after Non-Final Action
Apr 25, 2025
Non-Final Rejection — §103
Jul 30, 2025
Response Filed
Oct 22, 2025
Final Rejection — §103
Feb 24, 2026
Request for Continued Examination
Mar 10, 2026
Response after Non-Final Action
Mar 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586178
GRADING COSMETIC APPEARANCE OF A TEST OBJECT
2y 5m to grant • Granted Mar 24, 2026
Patent 12579873
SECURITY CAMERA SYSTEM WITH MULTI-DIRECTIONAL MOUNT AND METHOD OF OPERATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12574515
QUANTIZATION MATRIX ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM STORING BITSTREAM
2y 5m to grant • Granted Mar 10, 2026
Patent 12563235
CONFIGURABLE NAL AND SLICE CODE POINT MECHANISM FOR STREAM MERGING
2y 5m to grant • Granted Feb 24, 2026
Patent 12556685
IMAGE ENCODING/DECODING METHOD AND APPARATUS, AND RECORDING MEDIUM STORING BITSTREAM
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 6-7
Grant Probability: 65%
With Interview: 55% (-9.7%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 377 resolved cases by this examiner. Grant probability derived from career allow rate.
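The projection figures are straightforward derivations from the career data. A minimal sketch (assuming the tool uses the raw career allow rate as the baseline and simply adds the reported interview lift, with rounding to whole percentages) reproduces the displayed values:

```python
# Career figures reported for this examiner.
granted, resolved = 244, 377

# Baseline grant probability = career allow rate.
allow_rate = granted / resolved * 100          # ~64.7, displayed as 65%

# With-interview estimate = baseline plus the reported -9.7 point lift.
interview_lift = -9.7
with_interview = allow_rate + interview_lift   # ~55.0, displayed as 55%

print(round(allow_rate), round(with_interview))  # 65 55
```

If the tool instead applies a more involved model, these numbers would only be approximations, but the rounded results match both displayed figures exactly.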
