Prosecution Insights
Last updated: April 19, 2026
Application No. 18/427,185

SHOOTING METHOD, APPARATUS, ELECTRONIC DEVICE AND MEDIUM

Non-Final OA: §102, §103, §112
Filed: Jan 30, 2024
Examiner: ISLAM, MEHRAZUL NMN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 58% (29 granted / 50 resolved; -4.0% vs TC avg)
Interview Lift: +28.3% across resolved cases with an interview (strong lift; chart compares outcomes with vs. without interview)
Avg Prosecution: 3y 4m typical timeline; 46 applications currently pending
Total Applications: 96 across all art units (career history)
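
The headline figures above can be reproduced from the stated counts. Below is a minimal Python sketch, assuming the allow rate is simply grants divided by resolved cases and that the Tech Center average is back-solved from the stated -4.0% delta; the tool's actual methodology is not disclosed on this page.

```python
# Sanity-check sketch of the headline examiner stats shown above.
# Assumption: allow rate = granted / resolved, and the Tech Center
# average is implied by the stated "-4.0% vs TC avg" delta.

granted, resolved = 29, 50              # "29 granted / 50 resolved"

allow_rate = granted / resolved         # 0.58 -> "58% Career Allow Rate"
tc_avg = allow_rate + 0.040             # back-solved from "-4.0% vs TC avg"

print(f"Career allow rate: {allow_rate:.0%}")              # 58%
print(f"Delta vs TC average: {allow_rate - tc_avg:+.1%}")  # -4.0%
```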

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§103: 68.6% (+28.6% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 50 resolved cases.

Office Action

§102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant's claim of priority from Chinese Patent Application No. CN202310955316.5, filed on 07/31/2023.

Information Disclosure Statement

The information disclosure statement ("IDS") filed on 12/04/2024 has been reviewed and the listed references have been considered.

Drawings

The 11-page drawings have been considered and placed on record in the file.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4, 11-13, 18 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Specifically, claims 4, 12, 13, 18 and 19 recite "the second object"; there is insufficient antecedent basis for this limitation, as "a second object" is not previously recited in the claims. Additionally, claim 11 recites "the second marker"; there is insufficient antecedent basis for this limitation, as "a second marker" is not previously recited. Further, claims 4, 12, 13 and 17-19 recite "the region where the second object is located"; there is insufficient antecedent basis for this limitation, as "a region where a second object is located" is not previously recited.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5 and 7-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Han et al. (US 2024/0406536 A1).
Regarding claim 1, Han teaches, A shooting method, comprising: (Han, ¶0004: "a shooting method, applied to an electronic device having a camera") acquiring a plurality of first image frames in response to a first operation, (Han, ¶0004: "in response to the first operation… a first image is displayed") the plurality of first image frames comprising a first object; (Han, ¶0004: "one or more shot objects in the first image") displaying the plurality of first image frames and a first marker, (Han, ¶0004: "one or more marks are displayed on the first image, the one or more marks correspond to one or more shot objects") wherein the first marker is located in a region where the first object is located, (Han, ¶0033: "displaying the one or more marks on the one or more objects in the first image") and the first marker indicates that the region where the first marker is located in the plurality of first image frames is being used as a video capture region to generate a video; (Han, ¶0004: "displaying a second image with the first shot object as a protagonist in the first window; and saving an original video") and generating a video from images of the video capture region in the plurality of first image frames. (Han, ¶0005: "the original video generated based on an image stream in the preview window").

Regarding claim 2, Han teaches, The shooting method of claim 1, further comprising: in response to a second operation, moving the first marker from the region where the first object is located to a region where a second object is located in the plurality of first image frames. (Han, ¶0090: "display a selection box on each object, for example, a selection box 121 corresponding to the person 1, a selection box 122 corresponding to the person 2, and a selection box 123 corresponding to the person 3. In this case, the user may determine a video protagonist by using the selection boxes").

Regarding claim 3, Han teaches, The shooting method of claim 1, further comprising: displaying a plurality of second image frames, the plurality of second image frames showing images of the region where the first marker is located in the plurality of first image frames. (Han, ¶0216: "display a close-up image of the protagonist in the small window, thereby implementing a function of displaying the close-up video in the small window").

Regarding claim 4, Han teaches, The shooting method of claim 3, further comprising: while moving the first marker from the region where the first object is located to the region where the second object is located in the plurality of first image frames, (Han, ¶0132: "after the protagonist is switched to the person 2, the terminal 100 may determine a group of smoothly moving image frames") changing the images shown in the plurality of second image frames as the region where the first marker is located changes. (Han, ¶0132: "display the image frames in the small window 141, to implement non-jumping protagonist switching display").

Regarding claim 5, Han teaches, The shooting method of claim 1, comprising: in response to a third operation, adjusting a size of the first marker. (Han, ¶0134: "transpose control may be used to adjust a size of the window").
Regarding claim 7, Han teaches, The shooting method of claim 1, wherein, before displaying the plurality of first image frames and the first marker, the shooting method further comprises: displaying the plurality of first image frames and recognizing a preset object in the plurality of first image frames; (Han, ¶0033: "displaying one or more marks on the first image specifically includes: performing object identification on the first image collected by the camera") and displaying a second marker, the second marker indicating a region where the preset object is located in the plurality of first image frames. (Han, ¶0207: "After CropRagion Width and CropRagionHeight are determined, with reference to a known middle point (P3) of the protagonist person, the terminal 100 may crop the raw image to obtain the protagonist-centered close-up image").

Regarding claim 8, Han teaches, The shooting method of claim 7, further comprising: changing the second marker indicating the region where the preset object with a highest priority is located in the plurality of first image frames to the first marker; (Han, ¶0131: "the terminal 100 may set the person 2 corresponding to the selection box 122 as a protagonist. Referring to a user interface 25 shown in FIG. 2E, after the person 2 is set as the protagonist, the terminal 100 may display a close-up image of the person 2 in the small window") wherein the first marker is different from the second marker. (Han, ¶0099: "display an icon of another style, to indicate that the person 3 is selected as the protagonist. For distinguishing, for example, the selection box 123 in FIG. 1C changes to red or blue").

Regarding claim 9, Han teaches, The shooting method of claim 7, further comprising: in response to determining that the preset object is not recognized in the plurality of first image frames, (Han, ¶0262: "When protagonist focusing fails, the preview window 113 may continue to display the raw image") displaying the first marker in a preset position of the plurality of first image frames. (Han, ¶0207: "terminal 100 may crop the raw image to obtain the protagonist-centered close-up image").

Regarding claim 10, Han teaches, The shooting method of claim 7, wherein, before displaying the plurality of first image frames and the first marker, the shooting method further comprises: in response to a sixth operation, canceling display of the second marker; (Han, ¶0142: "the user cancels the selected protagonist person 3 by using an operation of tapping the check box 142, to re-select a new protagonist from the identified objects") and displaying the first marker based on a region of the plurality of first image frames where the sixth operation acts on. (Han, ¶0136: "the terminal 100 may display the small window 141 in the preview window 113 again based on a re-determined protagonist").

Regarding claim 11, Han teaches, The shooting method of claim 1, further comprising: in response to a seventh operation, canceling display of the first marker; (Han, ¶0142: "the terminal 100 may close the small window after canceling the previously selected protagonist") and displaying the second marker, the second marker indicating the region where the preset object is located in the plurality of first image frames. (Han, ¶0137: "operation of selecting a protagonist by the user is detected again, the terminal 100 may re-generate a small window, display a close-up image of the protagonist, and record a new close-up video").
Regarding claim 12, Han teaches, The shooting method of claim 1, further comprising: acquiring a plurality of third image frames, the plurality of third image frames being sequentially connected with the plurality of first image frames, the plurality of third image frames comprising a plurality of first frames acquired over a first time duration, (Han, ¶0181: "the terminal 100 may position the protagonist in an image sequence collected by the camera, to implement protagonist tracking and generate a close-up video of the protagonist") the plurality of first frames comprising the second object and not comprising the first object; (Han, ¶0174: "In a single-person scenario (there is only one person object in an image frame") and moving the first marker from the region where the first object is located in the plurality of first image frames to the region where the second object is located in the plurality of first frames. (Han, ¶0196: "If the protagonist is the person 3, the close-up image of the person 3 expected to be displayed in the small window should be an image surrounded by a dashed-line box 62").

Regarding claim 13, Han teaches, The shooting method according to claim 12, wherein, the plurality of third image frames further comprise a plurality of second frames acquired over a second time duration, the plurality of second frames are sequentially connected with the plurality of first frames, (Han, ¶0100: "In response to the operations of starting shooting and ending shooting, the terminal 100 may save, as a video, an image frame sequence collected by the camera during the operation") the plurality of second frames comprise the first object and the second object; the shooting method, further comprises: (Han, ¶0174: "in a multi-person scenario in which persons do not overlap, the object included in the ith frame of image can be better identified according to the foregoing method") in response to determining that the first time duration is less than or equal to a preset time duration, moving the first marker from the region where the second object is located to the region where the first object is located in the plurality of second frames. (Han, ¶0008: "in response to the second operation, recording the close-up video based on the image displayed in the first window, where duration of the close-up video is less than that of the original video").

Regarding claim 14, Han teaches, The shooting method of claim 1, further comprising: making a ratio of a range of the first marker to an area of the plurality of first image frames greater than or equal to 1/2. (Han, ¶0210: "the terminal 100 may zoom in a close-up image with 540p and 960p in equal proportions to obtain a close-up image with 1080p and 1920p").
Regarding claim 15, Han teaches, An electronic device, comprising: a camera; (Han, ¶0004: "an electronic device having a camera") a processor; and a memory, (Han, ¶0041: "one or more processors and one or more memories") configured to store processor-executable instructions; wherein the processor is configured to: (Han, ¶0041: "one or more processors execute the computer instructions, the electronic device is enabled to perform the method") acquire a plurality of first image frames in response to a first operation, (Han, ¶0004: "in response to the first operation… a first image is displayed") the plurality of first image frames comprising a first object; (Han, ¶0004: "one or more shot objects in the first image") display the plurality of first image frames and a first marker, (Han, ¶0004: "one or more marks are displayed on the first image, the one or more marks correspond to one or more shot objects") wherein the first marker is located in a region where the first object is located, (Han, ¶0033: "displaying the one or more marks on the one or more objects in the first image") and the first marker indicates that the region where the first marker is located in the plurality of first image frames is being used as a video capture region to generate a video; (Han, ¶0004: "displaying a second image with the first shot object as a protagonist in the first window; and saving an original video") and generate a video from images of the video capture region in the plurality of first image frames. (Han, ¶0005: "the original video generated based on an image stream in the preview window").

Regarding claim 16, Han teaches, The electronic device according to claim 15, wherein the processor is further configured to: display a plurality of second image frames, the plurality of second image frames showing images of the region where the first marker is located in the plurality of first image frames. (Han, ¶0216: "display a close-up image of the protagonist in the small window, thereby implementing a function of displaying the close-up video in the small window").

Regarding claim 17, Han teaches, The electronic device according to claim 16, wherein the processor is further configured to: while moving the first marker from the region where the first object is located to the region where a second object is located in the plurality of first image frames, (Han, ¶0132: "after the protagonist is switched to the person 2, the terminal 100 may determine a group of smoothly moving image frames") change the images shown in the plurality of second image frames as the region where the first marker is located changes. (Han, ¶0132: "display the image frames in the small window 141, to implement non-jumping protagonist switching display").
Regarding claim 18, Han teaches, The electronic device according to claim 15, wherein the processor is further configured to: acquire a plurality of third image frames, the plurality of third image frames be sequentially connected with the plurality of first image frames, the plurality of third image frames comprise a plurality of first frames acquired over a first time duration, (Han, ¶0181: "the terminal 100 may position the protagonist in an image sequence collected by the camera, to implement protagonist tracking and generate a close-up video of the protagonist") the plurality of first frames comprise the second object and not comprise the first object; (Han, ¶0174: "In a single-person scenario (there is only one person object in an image frame") and move the first marker from the region where the first object is located in the plurality of first image frames to the region where the second object is located in the plurality of first frames. (Han, ¶0196: "If the protagonist is the person 3, the close-up image of the person 3 expected to be displayed in the small window should be an image surrounded by a dashed-line box 62").

Regarding claim 19, Han teaches, The electronic device according to claim 18, wherein, the plurality of third image frames further comprise a plurality of second frames acquired over a second time duration, the plurality of second frames are sequentially connected with the plurality of first frames, (Han, ¶0100: "In response to the operations of starting shooting and ending shooting, the terminal 100 may save, as a video, an image frame sequence collected by the camera during the operation") the plurality of second frames comprise the first object and the second object; wherein the processor is further configured to: (Han, ¶0174: "in a multi-person scenario in which persons do not overlap, the object included in the ith frame of image can be better identified according to the foregoing method") in response to determining that the first time duration is less than or equal to a preset time duration, move the first marker from the region where the second object is located to the region where the first object is located in the plurality of second frames. (Han, ¶0008: "in response to the second operation, recording the close-up video based on the image displayed in the first window, where duration of the close-up video is less than that of the original video").
Regarding claim 20, Han teaches, A non-transitory computer-readable storage medium having stored thereon executable instructions, (Han, ¶0042: "a computer-readable storage medium, including instructions") wherein when the executable instructions are executed by a processor, implement: (Han, ¶0042: "When the instructions are run on an electronic device, the electronic device is enabled to perform the method") acquiring a plurality of first image frames in response to a first operation, (Han, ¶0004: "in response to the first operation… a first image is displayed") the plurality of first image frames comprising a first object; (Han, ¶0004: "one or more shot objects in the first image") displaying the plurality of first image frames and a first marker, (Han, ¶0004: "one or more marks are displayed on the first image, the one or more marks correspond to one or more shot objects") wherein the first marker is located in a region where the first object is located, (Han, ¶0033: "displaying the one or more marks on the one or more objects in the first image") and the first marker indicates that the region where the first marker is located in the plurality of first image frames is being used as a video capture region to generate a video; (Han, ¶0004: "displaying a second image with the first shot object as a protagonist in the first window; and saving an original video") and generating a video from images of the video capture region in the plurality of first image frames. (Han, ¶0005: "the original video generated based on an image stream in the preview window").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 2024/0406536 A1) in view of Feng et al. (US 2023/0021016 A1).
Regarding claim 6, Han teaches, The shooting method of claim 1, further comprising. However, Han does not explicitly teach blurring other regions in the plurality of first image frames except the region where the first marker is located.

In an analogous field of endeavor, Feng teaches blurring other regions in the plurality of first image frames except the region where the first marker is located. (Feng, ¶0037: "Based on the location of the ROI in the image frame… blurring a region of the image frame outside of the ROI").

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Han using the teachings of Feng to introduce blurring. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of focusing on the desired object while shooting a video. Therefore, it would have been obvious to combine the analogous arts Han and Feng to obtain the invention in claim 6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571) 270-0489. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEHRAZUL ISLAM/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Jan 30, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602808: METHOD FOR INSPECTING AN OBJECT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592075: REMOTE SENSING FOR INTELLIGENT VEGETATION TRIM PREDICTION
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579695: Method of Generating Target Image Data, Electrical Device and Non-Transitory Computer Readable Medium
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12524900: METHOD FOR IMPROVING ESTIMATION OF LEAF AREA INDEX IN EARLY GROWTH STAGE OF WHEAT BASED ON RED-EDGE BAND OF SENTINEL-2 SATELLITE IMAGE
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12489964: PATH PLANNING
Granted Dec 02, 2025 (2y 5m to grant)
Study what changed to get these cases past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 86% (+28.3%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 50 resolved cases by this examiner. Grant probability derived from career allow rate.
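
The 86% with-interview figure is consistent with adding the +28.3% interview lift to the 58% baseline as percentage points. A minimal sketch under that assumption (the page does not state the actual formula):

```python
# Sketch of the "With Interview" projection, assuming the +28.3%
# interview lift is an additive percentage-point delta on the
# baseline grant probability; the actual formula is not disclosed.

baseline = 0.58                    # grant probability (career allow rate)
lift = 0.283                       # "+28.3% Interview Lift"

with_interview = baseline + lift   # 0.863 -> displayed as "86%"
print(f"With interview: {with_interview:.0%}")   # 86%
```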
