Prosecution Insights
Last updated: April 19, 2026
Application No. 18/150,010

ELECTRONIC DEVICE HAVING A PLURALITY OF LENSES AND CONTROLLING METHOD THEREOF

Non-Final Office Action (§103)
Filed: Jan 04, 2023
Examiner: YILMAKASSAYE, SURAFEL
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 5 (Non-Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 2y 6m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 50% (17 granted / 34 resolved; -12.0% vs TC avg). Grants 50% of resolved cases.
Interview Lift: +33.6%, a strong lift in allowance for resolved cases with an interview versus without.
Avg Prosecution: 2y 6m typical timeline; 31 applications currently pending.
Total Applications: 65 across all art units (career history).
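A minimal sketch of how these examiner metrics are typically derived (the report's exact methodology is not shown): allow rate is granted over resolved, and interview lift is the allowance rate among resolved cases that had an interview minus the rate among those that did not. Only the 17 granted / 34 resolved split comes from the report; the with/without-interview breakdown below is a placeholder.

```python
# Sketch (assumed methodology): allow rate = granted / resolved; interview lift =
# allow rate with interview minus allow rate without. Only the 17/34 split is from
# the report; the with/without-interview counts below are made-up placeholders.

def allow_rate(granted: int, resolved: int) -> float:
    return granted / resolved if resolved else 0.0

career = allow_rate(17, 34)              # 0.50 -> "50% Career Allow Rate"

with_interview = allow_rate(10, 15)      # placeholder counts (10 of 17 grants)
without_interview = allow_rate(7, 19)    # placeholder counts (7 of 17 grants)
interview_lift = with_interview - without_interview

print(f"career allow rate: {career:.1%}")
print(f"interview lift:    {interview_lift:+.1%}")
```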

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 58.7% (+18.7% vs TC avg)
§102: 34.3% (-5.7% vs TC avg)
§112: 4.5% (-35.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 34 resolved cases.
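The statute figures above sum to roughly 100%, which suggests they are the share of this examiner's rejections by statute rather than overcome rates; notably, each delta implies a flat 40% Tech Center baseline per statute (for example, 58.7% minus 18.7 points is 40.0%). Under that assumption, a minimal sketch of the computation; the rejection list is invented example data, not the report's real inputs.

```python
from collections import Counter

# Sketch (assumed meaning): share of this examiner's rejections by statute, compared
# to a per-statute Tech Center baseline. The rejection list is invented example data;
# the 0.40 baseline is implied by the report's deltas (e.g., 58.7% - 18.7 pts = 40%).
rejections = ["103", "103", "102", "103", "112", "102", "103", "101", "103", "102"]

counts = Counter(rejections)
total = sum(counts.values())
tc_baseline = 0.40                      # implied per-statute baseline from the report

for statute in ("101", "102", "103", "112"):
    share = counts.get(statute, 0) / total
    delta = share - tc_baseline
    print(f"§{statute}: {share:.1%} ({delta:+.1%} vs TC avg)")
```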

Office Action

§103
Detailed Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Acknowledgements

2. Applicant's arguments/remarks, filed, are acknowledged. Amended claims are acknowledged. Claims remain pending and have been examined.

Response to Arguments

3. Applicant's arguments, see pages 8-9, filed on 12/04/2025, with respect to the rejections of independent claims 1 and 14 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Takkar et al. (US 2023/0033548 A1; further referred to as Takkar).

Applicant, with respect to claims 1 and 14, amends to add a limitation specifying "…wherein as the movement speed of the at least one object increases, the designated region is determined to have a larger size, and as the movement speed of the at least one object decreases, the designated region is determined to have a smaller size…".

Accordingly, Takkar, in [0050]-[0055], teaches a system and method for a computer vision task using a sequence of image frames; a computer vision module 200 receives a first frame and may output a result relative to a computer vision task, which may relate to object tracking, so as to output a bounding box of a tracked object in the first frame. As part of module 200, a differential image computation submodule 204 is taught, wherein unit 204 receives a first and a second frame of respective timing; the second frame may be a previously captured frame (with respect to the first frame) or a future frame (with respect to the first frame), and as such the first and second frames may be consecutive. A differential image between the first and second frames may be computed, in particular for a region of interest (ROI) within the first frame, employing a crop image functional block 206. Block 206 crops the first and second frames to a ROI; the ROI may be defined based on a bounding box which is predicted for an object of interest in the second frame, wherein object detection may be performed on the first frame to determine a bounding box for an object of interest. A ROI can be defined to be larger if an object of interest is expected to move at higher speed, or smaller if an object of interest is expected to move at lower speeds. Therefore, Takkar teaches a ROI which may be defined based on a bounding box that changes size based on a tracked object that is in motion, wherein the ROI may be larger if the object is expected to move faster or smaller if the object is expected to move slower.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1, 5, 12-14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2018/0196472 A1; further referred to as Lee) in view of Ion et al. (US 2022/0210363 A1; further referred to as Ion) and further in view of Takkar et al. (US 2023/0033548 A1; further referred to as Takkar).

6. Regarding claim 1, Lee teaches an electronic device comprising: a plurality of cameras (…Lee, in [0043], teaches camera module 291, Fig. 2…); a display (…[0043], display 260, Fig. 2…); and at least one processor (…[0043], one or more processors 210, Fig. 2…) configured to: execute an application (…[0043], element 210 may load an instruction or data into a volatile memory and process the loaded instruction or data…) supporting image capturing using the plurality of cameras (…wherein, as stated in [0056], an application 370 may include a camera 376, which is equivalent to camera module 291…), obtain a first image having a first field of view via a first camera among the plurality of cameras (…[0066], first camera 431 may obtain an image and may include a wide-angle lens, Fig. 4…), display a preview using the first image on the display in a state where a magnification for the image capturing is configured to be a first magnification (…[0072], the processor may output a first image on the display from the first camera and the image may be a preview; further, as stated in [0073], the processor may determine whether the first image has a size satisfying a specified condition…), change the preview using the first image, displayed on the display, to include at least one object identified in the first image according to a movement of the at least one object (…[0075] states the processor may analyze the first image to detect a moving subject, given a satisfied condition determined algorithmically. [0072] teaches that the first image may be a preview image. Further, [0076] teaches that a second image may be output on the display if an object included in the first image satisfies a specified condition (using a second camera). The second image may be a preview…).

Lee does not further teach an electronic device wherein the processor is configured to: determine a size of a designated region of the first field of view, based on a movement speed of the at least one object identified in the first image (…However, Ion teaches an image sensor of pixels forming a region of interest, wherein Ion, in [0027], teaches that one or more full-resolution images may be used to determine a speed of an object of interest. When this speed crosses a threshold speed (e.g., exceeds the threshold speed or falls below the threshold speed), the ROI containing the object may be selected and used to acquire a plurality of ROI images of the object. [0065] further teaches that ROIs may be reconfigurable (the number of ROIs, the position of each ROI, and the shape of each ROI may be reconfigurable). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the teachings of Ion, a speed-defined condition as a basis to reconfigure the ROI, thereby having a dynamic means to determine the number of ROIs in an image…).

Further, Lee in view of Ion does not teach wherein as the movement speed of the at least one object increases, the designated region is determined to have a larger size, and as the movement speed of the at least one object decreases, the designated region is determined to have a smaller size (…however, Takkar, in [0050]-[0055], teaches a ROI which may be defined based on a bounding box that changes size based on a tracked object that is in motion, wherein the ROI may be larger if the object is expected to move faster or smaller if the object is expected to move slower. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a ROI as taught by Takkar can be implemented in the similarly taught ROI determination of Ion, so as to account for the dynamics that a moving object may present in being determined to be bounded within a region of interest, thereby appropriately presenting the object without being clipped and without including additional undesired image information…).

Lee further teaches the electronic device wherein the processor is configured to: change the magnification for the image capturing to a second magnification, based on a center point of the at least one object in the first image (…[0077] teaches that the processor 450 may crop an image obtained by the second camera to obtain the second image, wherein the second camera may have a narrower field of view than the first camera. The processor may crop the acquired image such that the area recognized as the face or a moving subject is located in the center; by making the area recognized as the face or moving subject the center of the cropped image, it is apparent that Lee also takes into account the center of that area in the image captured using the first camera, so as to make it the center of the cropped image…) and the at least one object being positioned in the designated region of the first field of view (…[0075] states the processor may analyze the first image to detect a moving subject, given a satisfied condition determined algorithmically. The processor may recognize the size of the area recognized as the moving subject. Further, [0076] teaches that the processor may zoom in on the identified object by using a second camera. As stated in [0059] of the instant application, "a region on which an object is positioned and by which the camera switching determination module 230 determines to perform camera switching may be referred to as a designated region"…), obtain a second image having a second field of view via a second camera among the plurality of cameras (…[0076], a processor 450 may output a second image obtained by using a second camera, wherein the processor may zoom in on an identified object using the second camera. [0077] further teaches that the second camera may have a narrower field of view than the first camera…), the second magnification being determined based on the first field of view, the second field of view, and a position of the at least one object in the first field of view (…[0075], the processor may determine whether an area recognized as a moving subject and included in the first image has a size satisfying a specified condition; [0076], the processor may output the second image obtained by using the second camera 432 on the display 420 if an object included in the first image satisfies a specified condition. If the specified condition is satisfied, the processor may obtain the second image by using the second camera. The processor may zoom in on the identified object by using the second camera…), and display at least a part of the second image on the display as the preview, in a state in which the magnification for the image capturing is configured to be the second magnification (…[0076], the processor 450 may output the second image obtained by using the second camera on the display. The processor may zoom in on the identified object to obtain the second image. The second image may be a preview image…).

7. Regarding claim 5, Lee in view of Ion and further in view of Takkar teaches the electronic device of claim 1 (see claim 1 above), wherein the at least one processor is further configured to: obtain category information on the at least one object identified in the first image (…[0122] states the electronic device may recognize a face and an area within the outline of the face, and may recognize a rectangular area including the face. The electronic device may also recognize an area including a plurality of faces…), detect the at least one object by using the second camera, based on the obtained category information (…[0123] states the electronic device may determine whether the size of the area recognized as the face satisfies a specified condition. Further, [0124] teaches that, if the specified condition is satisfied, the electronic device may actuate the second camera. The electronic device may zoom in on the area recognized as the face by using an optical zoom function of the second camera…), and track the detected at least one object by using the second camera (…[0125] further teaches that the electronic device may output, on the display, a preview image of a picture to be taken by the second camera or a moving image being taken by the second camera, thereby teaching an ability to track an object that has previously been recognized as, e.g., a face…).

8. Regarding claim 12, Lee in view of Ion and further in view of Takkar teaches the electronic device of claim 1 (see claim 1 above), wherein, in a case in which the first field of view is greater than the second field of view, the at least one processor is further configured to determine, as the designated region of the first field of view, a region overlapping with or smaller than the second field of view (…[0077] teaches that the processor may crop an image obtained by a second camera having a narrower field of view than the first camera such that the area recognized as an object or moving subject is located in the center. As explained in [0076], this results from a zooming-in process performed using the second camera…).
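For concreteness, the speed-dependent region sizing that the amended claims 1/14 limitation and the examiner's reading of Takkar both describe can be sketched as below; the linear scaling, the clamp, and all constants are illustrative assumptions, not anything recited in the claims or in the cited references.

```python
# Sketch of a designated region / ROI whose size scales with the tracked object's
# movement speed: larger when the object moves faster, smaller when it slows down.
# The linear-plus-clamp form and the constants are illustrative assumptions only.

def designated_region_size(base_size: float, speed_px_per_frame: float,
                           gain: float = 0.5, max_factor: float = 4.0) -> float:
    """Return a region size that grows with speed and shrinks as speed drops."""
    size = base_size * (1.0 + gain * speed_px_per_frame)
    return min(size, max_factor * base_size)   # keep the region bounded

slow = designated_region_size(100.0, 0.5)      # slower object -> smaller region
fast = designated_region_size(100.0, 4.0)      # faster object -> larger region
assert fast > slow
```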
9. Regarding claim 13, Lee in view of Ion and further in view of Takkar teaches the electronic device of claim 1 (see claim 1 above), wherein each of the first camera and the second camera is one of a tele camera, a wide camera, or an ultra-wide camera (…[0066] teaches that the first camera may include a wide-angle lens, wherein, as taught in [0067], the second camera may include, e.g., a telephoto lens…).

10. Regarding claim 14, claim 14 is rejected for the reasons discussed with respect to claim 1 (see claim 1 above).

11. Regarding claim 18, claim 18 is rejected for the reasons discussed with respect to claim 5 (see claim 5 above).

12. Claims 2 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2018/0196472 A1) in view of Ion et al. (US 2022/0210363 A1) and Takkar et al. (US 2023/0033548 A1) and further in view of Nakamura (US 2019/0128667 A1).

13. Regarding claim 2, Lee in view of Ion and Takkar teaches the electronic device of claim 1 (see claim 1 above), wherein the at least one processor is further configured to: switch from the obtaining of the first image via the first camera to the obtaining of the second image via the second camera, based on a determination that the at least one object is positioned in the designated region of the first field of view for a designated time or longer (…Lee teaches in [0076] that the processor may output the second image obtained by using the second camera on the display if an object included in the first image satisfies a specified condition…). Lee does not specify the condition to be such that the switching to obtaining a second image is based on the determination that the at least one object is positioned in the designated region of the first field of view for a designated time or longer (…however, Nakamura teaches an object monitoring apparatus wherein, according to [0027], a first sensor 12-1 and a second sensor 12-2 have the configuration of a ranging sensor that three-dimensionally measures a distance and a direction from the sensor to an object present in a spatial area. As such, a ranging sensor that can be employed in the teaching is, for example, a stereoscope-type measurement device using two image sensing devices (e.g., CCD cameras). Further, [0047] teaches a monitoring operation using sensors 12-1 and 12-2, involving judging units 14-1 and 14-2 which produce results indicative of the presence or absence of an object within a monitored area A or B. In the process of the monitoring operation, each of the first and second sensors 12-1, 12-2 measures a target with a predetermined period T, so as to produce the presence or absence of an object, as judged by the units employed to do so. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a determination to judge the presence or absence of an object within a specified monitored area, as such by a camera, can be made conditional on the measuring of a target with a predetermined period T as taught by Nakamura, thus to obtain and display a second image by using a second camera if an object included in the first image satisfies a specified condition wherein the object is positioned in a designated region of a first field of view for a designated time, as taught by Lee…).

14. Regarding claim 15, claim 15 is rejected for the reasons discussed with respect to claim 2 (see claim 2 above).

15. Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2018/0196472 A1) in view of Ion et al. (US 2022/0210363 A1) and Takkar et al. (US 2023/0033548 A1) and further in view of Nakamura (US 2019/0128667 A1) and Yim (US 2020/0267326 A1).

16. Regarding claim 3, Lee in view of Ion and Takkar and further in view of Nakamura teaches the electronic device of claim 2 (see claim 2 above), wherein the at least one processor is further configured to: activate the second camera before the switch from the obtaining of the first image via the first camera to the obtaining of the second image via the second camera (…though Lee teaches in [0130]-[0132] that an electronic device may track a moving subject in an image and, given a specified condition, may actuate a second camera and obtain a zoomed-in image as the second image, Lee does not specify a particular activation operation. However, Yim teaches an electronic device with multiple cameras wherein, as taught in [0096], the electronic device may activate a second camera distinct from a first camera in response to a first input, supporting a magnification that is not included in the range of the first camera. Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the camera activation process as taught by Yim into the teachings of Lee, thus to have the means to activate a device only when necessary and thereby conserve power…). Lee further teaches to: track the at least one object via the activated second camera (…as stated in [0076], a processor may automatically track the recognized area of a moving subject in a second image using the second camera, given that a specified condition is met within a first image of a first camera…).

17. Regarding claim 16, claim 16 is rejected for the reasons discussed with respect to claim 3 (see claim 3 above).

18. Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2018/0196472 A1) in view of Ion et al. (US 2022/0210363 A1) and Takkar et al. (US 2023/0033548 A1) and Nakamura (US 2019/0128667 A1) and further in view of Yoshida (US 2014/0185870 A1).

19. Regarding claim 4, Lee in view of Ion and Takkar and further in view of Nakamura teaches the electronic device of claim 2 (see claim 2 above), wherein the at least one processor is further configured to: after the switch from the obtaining of the first image via the first camera to the obtaining of the second image via the second camera, display, on the display as the preview, the at least the part of the second image, in which a blur effect is applied to at least a partial region among a region remaining after excluding the at least one object. Lee teaches the first part of claim 4's limitation, as previously described. However, Lee does not further teach a process "in which a blur effect is applied to at least a partial region among a region remaining after excluding the at least one object" (…however, Yoshida, in [0007], teaches an image processing apparatus including an acquisition unit to acquire image data, a detection unit configured to detect objects from image data, a selection unit configured to select a main object from detected objects, an object distance information acquisition unit configured to acquire information regarding distances to detected objects, divide the image data into small areas, and acquire object distance information for each of the small areas, a distance grouping unit configured to, based on the distance information, group the small areas into distance groups defined by different distance ranges, a determination unit configured to determine a blurring level for each of the distance groups, and a blurring processing unit configured to perform blurring processing on the image data based on the blurring level. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a blurring effect according to the teachings of Yoshida could have been implemented in the teachings of Lee and the combined references as specified above, thus to apply to a second image a blurring effect in regions other than the region of the object, so that the un-blurred object stands out with respect to the blurred region(s)…).

20. Regarding claim 17, claim 17 is rejected for the reasons discussed with respect to claim 4 (see claim 4 above).

21. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2018/0196472 A1) in view of Ion et al. (US 2022/0210363 A1) and Takkar et al. (US 2023/0033548 A1) and further in view of Wu et al. (US 2016/0203584 A1; further referred to as Wu).

22. Regarding claim 6, Lee in view of Ion and Takkar teaches the electronic device of claim 1 (see claim 1 above), further comprising: a memory (…[0036], memory 130, Fig. 1…), wherein the at least one processor is further configured to: determine a first region of interest including the at least one object (…[0074] teaches the processor may analyze the first image to detect an area in which a person's face is included in the first image…), and store, in the memory, first information including information on a position of the at least one object in the first region of interest (…[0037] teaches that the memory 130 may include a volatile and/or nonvolatile memory. The memory 130 may store instructions or data associated with at least one other element of the electronic device 101. [0074] teaches that the processor may determine whether the absolute or relative size of the area recognized as the face satisfies a specified condition. For example, when the first image has a size of 1440×1080, the processor may determine that the specified condition is satisfied if the area recognized as the face has a size of 180×180 or more, or is about 2% or more of the entire size of the first image, thus teaching that the processor has stored information relative to the size/position of an object within the image…). Lee, however, does not further teach: a ratio of a size of the at least one object to the first region of interest. Wu, however, teaches a method employed in a system capable of varying scaling ratios according to different regions of an image. Therein, Wu teaches: a ratio of a size of the at least one object to the first region of interest (…wherein Wu teaches in [0022]-[0024], as imaged in Fig. 3, a first ROI (C1), wherein R1 is set as a center of C1. Further, a first bridging region C2 is also set. [0008] states an image processing system is applied to execute an image adjusting method of setting a first ROI (region of interest) circle on the image, acquiring a center of the first ROI circle to be a first reference point, setting at least one first bridging region circle by the first reference point, and utilizing a first scaling ratio, a second scaling ratio and a third scaling ratio to respectively adjust pixels inside the first ROI circle. Thus, an image adjusting method is taught of varying scaling ratios according to different regions of an image. Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to have used a method such as taught by Wu, wherein identified regions within an image are respectively adjusted by different scaling ratios, so as to have the further capability of determining other respective relationships that may be derived in the system as taught by Lee, wherein an object within a ROI may be tracked…).

23. Regarding claim 19, claim 19 is rejected for the reasons discussed with respect to claim 6 (see claim 6 above).

24. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2018/0196472 A1) in view of Ion et al. (US 2022/0210363 A1) and Takkar et al. (US 2023/0033548 A1) and further in view of Wu et al. (US 2016/0203584 A1) and Nakamura (US 2019/0128667 A1).

25. Regarding claim 7, Lee in view of the combined references as cited in claim 6 above teaches the electronic device of claim 6. Though Lee teaches size recognition of a particular area within an image, Lee does not specify wherein the at least one processor is further configured to: determine the first region of interest, based on coordinates of the at least one object. However, Nakamura teaches an object monitoring apparatus wherein a judgement unit is configured to: determine the first region of interest, based on coordinates of the at least one object (…wherein [0035] teaches a first judging unit uses measurement data and the position and shape of a monitored area, and executes processing such as a three-dimensional coordinate transformation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the processing capability as taught by Nakamura, thus to determine the location of a region of interest within an image and thereby have additional means for handling regions other than a region of particular interest…).

26. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2018/0196472 A1) in view of Ion et al. (US 2022/0210363 A1) and Takkar et al. (US 2023/0033548 A1) and further in view of Wu et al. (US 2016/0203584 A1) and Koizumi (US 2005/0254686 A1).

27. Regarding claim 8, Lee in view of the combined references as specified teaches the electronic device of claim 6. Though Lee teaches an electronic device with the ability to track a moving subject included in a first image (e.g., in [0130]), Lee does not specify an electronic device wherein the at least one processor is further configured to: update the first information according to a change in the position of the at least one object in the first region of interest, or the ratio of the size of the at least one object to the first region of interest (…however, Koizumi teaches a device and method for tracking an object, including a program for tracking an object enabling correct association between an object and an object zone, wherein the method updates the first information according to a change in the position of the at least one object in the first region of interest, or the ratio of the size of the at least one object to the first region of interest. [0053] teaches that the object-tracking method extracts zone characteristic quantities from the image information, the object-zone information and the first zone-correspondence information to provide the zone characteristic-quantity information, and updates the stored object characteristic quantities on the basis of the zone characteristic-quantity information, the first zone-correspondence information or the correspondence information that has been determined, and the object characteristic quantities generated prior to the present; [0013] states the object-tracking device is provided with an object-zone extracting means for extracting the object zones from the image information and providing the object-zone information that includes the image information about the object zones, and a state-of-tracking deciding means for deciding the states of tracking of individual objects or object zones, wherein the state-of-tracking deciding means considers relative positions of each object with respect to other objects; further, [0016] states zone characteristic quantities include at least one of, or one of combinations of, area, image template and color histogram normalized with respect to the area of the object zone, and finds an object zone corresponding to the object of interest from the first zone-correspondence information and generates at least one of, or one of combinations of, area, image template and color histogram normalized with respect to the area of the object zone as an object characteristic quantity. Further, Lee, in [0074], discusses an example of a condition specifying a region's size as a percentage of the entire image it is part of (a ratio). Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention that updating the first information according to changes of an object within a region provides a means to develop realistic tracking, relative to an object's changing position within an area of imaging…).

Allowable Subject Matter

28. Claims 9, 11, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
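As an aside on the condition disputed for claims 2 and 15 (paragraph 13 above), the "positioned in the designated region for a designated time or longer" trigger can be sketched as a small dwell-time gate; the threshold, timing source, and class structure below are assumptions for illustration, not the claimed implementation.

```python
import time

# Sketch of the claim 2 condition: switch from the first (wide) camera to the
# second (tele) camera only after the tracked object has stayed inside the
# designated region for at least `dwell_s` seconds. Thresholds are illustrative.
class CameraSwitcher:
    def __init__(self, dwell_s: float = 0.5):
        self.dwell_s = dwell_s
        self._entered_at: float | None = None

    def update(self, in_designated_region: bool, now: float | None = None) -> bool:
        """Return True when the switch to the second camera should occur."""
        now = time.monotonic() if now is None else now
        if not in_designated_region:
            self._entered_at = None            # object left the region; reset timer
            return False
        if self._entered_at is None:
            self._entered_at = now             # object just entered the region
        return (now - self._entered_at) >= self.dwell_s

switcher = CameraSwitcher(dwell_s=0.5)
assert switcher.update(True, now=0.0) is False   # just entered, keep first camera
assert switcher.update(True, now=0.6) is True    # dwelled long enough, switch
```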
29. Regarding claim 9, Lee and the combined references as cited teach the electronic device of claim 6, wherein the at least one processor is further configured to: determine a second region of interest including the at least one object, based on the first information stored in the memory (…[0094] teaches that the electronic device may obtain a first image and analyze a first face, a second, a third, etc., wherein the electronic device may classify the plurality of faces into one or more groups; thus the electronic device may recognize a first area including a first group and a second area including a second group. Further, [0037] teaches that element 130 may store instructions or data associated with at least one other element of the electronic device 101…), wherein the position of the at least one object in the first region of interest, and the ratio of the size of the at least one object to the first region of interest correspond to a position of the at least one object in the second region of interest (…[0074] teaches that determining a particular image size, or a percentage of an object's size relative to the entire image it is part of, is a function the processor may perform algorithmically. As such, it is evident from the teachings of [0094] that this may be performed for numerous objects and the regions of their location within an image. According to the teachings of [0094], groupings of objects can be determined to include objects of interest; as such, a plurality of objects may be classified into one or more groups based on distances between the objects (faces)…), and a ratio of a size of the at least one object to the second region of interest (…as taught in [0074], the processor may determine whether an absolute or relative size of an area recognized as a face satisfies a specified condition. An object's size, or the ratio of its size relative to the entire size of the frame, can be determined…). Lee does not further teach a processor configured to: deactivate the first camera based on the second region of interest being determined. However, Lee '804 teaches an electronic device and method for controlling activation of a camera module, a system for deciding which camera to use for iris authentication, wherein a processor may determine to activate or deactivate a first or second camera to obtain an image of an object that may be at a particular distance from the electronic device. Therein, Lee '804, in [0030], teaches a processor may calculate a distance between an object and an electronic device using a second camera, determine whether the calculated distance is greater than or equal to a second distance, and activate the first camera to obtain an iris image when the calculated distance is greater than or equal to the second distance. The processor may deactivate the first camera when the calculated distance is less than the second distance. However, Lee '804 does not particularly specify deactivating the first camera based on the second region of interest being determined.

30. Regarding claim 20, claim 20 is objected to for the reasons discussed with respect to claim 9.

31. Regarding claim 11, Lee and the combined references as cited in claim 1 teach the electronic device of claim 1; however, the combined references do not further teach: wherein, in a case in which the first field of view is smaller than the second field of view, the at least one processor is further configured to determine, as the designated region of the first field of view, a region adjacent to a border of the first field of view. However, Hong et al. (US 2021/0306535 A1) teaches a dual aperture camera, and presents in Fig. 3, as described in [0027]-[0030], a first field of view 312 (smaller than a second field of view) and a second field of view 314. Positioned at R0 is an object of interest. Field of view 312 may be represented by image 200A of Fig. 2 and field of view 314 by image 200B of Fig. 2. Though an object in image 200A may be identified as being adjacent to a border of field of view 312, the teaching does not specifically disclose this to be a purposed function determined for a particular reason.

Conclusion

32. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SURAFEL YILMAKASSAYE, whose telephone number is (703) 756-1910. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TWYLER HASKINS, can be reached at (571) 272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SURAFEL YILMAKASSAYE/
Examiner, Art Unit 2639

/TWYLER L HASKINS/
Supervisory Patent Examiner, Art Unit 2639
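For reference when responding to the claims 4/17 ground (paragraph 19 above), the "blur everything except the tracked object" behavior can be sketched as a simple masking step; this is a generic sketch assuming a box blur and a rectangular object box, not the Yoshida distance-grouped pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter   # simple box blur stand-in

# Sketch of the claim 4 effect: blur the preview everywhere except the region
# containing the tracked object. The blur and the rectangular mask are simplifications.
def blur_outside_object(frame: np.ndarray, box: tuple[int, int, int, int],
                        blur_size: int = 9) -> np.ndarray:
    """box = (top, left, height, width) of the detected object."""
    out = uniform_filter(frame.astype(float), size=blur_size)  # blurred copy
    t, l, h, w = box
    out[t:t + h, l:l + w] = frame[t:t + h, l:l + w]            # keep the object sharp
    return out

preview = np.random.rand(480, 640)                 # stand-in grayscale preview frame
result = blur_outside_object(preview, box=(200, 250, 80, 120))
```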

Prosecution Timeline

Jan 04, 2023
Application Filed
Aug 23, 2024
Non-Final Rejection — §103
Sep 17, 2024
Interview Requested
Oct 17, 2024
Applicant Interview (Telephonic)
Oct 17, 2024
Examiner Interview Summary
Nov 14, 2024
Response Filed
Feb 04, 2025
Final Rejection — §103
Apr 01, 2025
Request for Continued Examination
Apr 02, 2025
Response after Non-Final Action
Apr 08, 2025
Non-Final Rejection — §103
Jul 08, 2025
Response Filed
Sep 04, 2025
Final Rejection — §103
Nov 13, 2025
Interview Requested
Nov 25, 2025
Applicant Interview (Telephonic)
Nov 25, 2025
Examiner Interview Summary
Dec 04, 2025
Request for Continued Examination
Dec 19, 2025
Response after Non-Final Action
Jan 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12538047
Ambient Light Sensing with Image Sensor
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12506981
PHOTOELECTRIC CONVERSION APPARATUS, METHOD FOR CONTROLLING PHOTOELECTRIC CONVERSION APPARATUS, AND STORAGE MEDIUM
Granted Dec 23, 2025 (2y 5m to grant)
Patent 12495224
IMAGE SENSING DEVICE AND IMAGE PROCESSING METHOD OF THE SAME
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12470797
OPTICAL ELEMENT DRIVING MECHANISM
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12452534
CONTROL APPARATUS, LENS APPARATUS, IMAGE PICKUP APPARATUS, IMAGE PICKUP SYSTEM, CONTROL METHOD, AND A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Oct 21, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 50%
Grant Probability With Interview: 84% (+33.6%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 34 resolved cases by this examiner. Grant probability is derived from the career allow rate.
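The 84% "With Interview" figure appears to be the 50% base grant probability plus the +33.6 percentage-point interview lift, capped at 100%; a one-line sketch of that assumed arithmetic:

```python
# Sketch (assumed): the "With Interview" projection is the base grant probability
# plus the interview lift in percentage points, capped at 100%.
base_grant_probability = 0.50      # career allow rate shown above
interview_lift = 0.336             # +33.6% interview lift shown above

with_interview = min(base_grant_probability + interview_lift, 1.0)
print(f"{with_interview:.0%}")     # -> 84%
```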
