Prosecution Insights
Last updated: April 19, 2026
Application No. 18/693,089

RECORDING METHOD AND APPARATUS, AND STORAGE MEDIUM

Non-Final OA — §101, §103, §112
Filed: Mar 18, 2024
Examiner: PETERSON, CHRISTOPHER K
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: Honor Device Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 78% — Favorable
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 92%

Examiner Intelligence

Grants 78% — above average
Career Allow Rate: 78% (636 granted / 813 resolved; +16.2% vs TC avg)
Interview Lift: +13.9% (moderate lift; resolved cases with interview vs. without)
Avg Prosecution: 2y 6m (typical timeline); 23 applications currently pending
Total Applications: 836 across all art units (career history)
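The headline figures above can be reproduced from the raw counts. A minimal sketch, assuming (not stated by the source) that the dashboard computes the allowance rate as grants over resolved cases, adds the interview lift directly to the base rate, and rounds to the nearest percent:

```python
# Reproducing the dashboard's headline figures (assumed methodology:
# allow rate = granted / resolved; interview lift is added to the
# base rate before rounding).
granted = 636    # applications granted by this examiner
resolved = 813   # resolved applications (grants + abandonments)

allow_rate = granted / resolved        # career allowance rate
interview_lift = 0.139                 # reported +13.9% lift

print(f"Career allow rate: {allow_rate:.0%}")                   # 78%
print(f"With interview:    {allow_rate + interview_lift:.0%}")  # 92%
```

Both printed values match the dashboard's 78% and 92% figures under these assumptions.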

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 30.3% (-9.7% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Tech Center average values are estimates • Based on career data from 813 resolved cases
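The four reported deltas are all consistent with a single Tech Center baseline of 40.0% per statute. A small sketch; the 40% baseline is inferred from the reported numbers, not stated by the source:

```python
# Each reported delta equals the examiner's rate minus ~40.0%,
# suggesting the TC-average estimate used for comparison is 40%
# for all four statutes (inferred from the data, not stated).
TC_AVG = 0.400

examiner_rates = {"101": 0.053, "103": 0.491, "102": 0.303, "112": 0.080}

for statute, rate in examiner_rates.items():
    delta = (rate - TC_AVG) * 100
    print(f"§{statute}: {rate:.1%} ({delta:+.1f}% vs TC avg)")
```

This reproduces the -34.7%, +9.1%, -9.7%, and -32.0% deltas shown above.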

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 8/1/2024 and 11/18/2024 were filed after the filing date of the application on 3/18/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 12 and 13 recite the limitation "the second button" in line 2. There is insufficient antecedent basis for this limitation in the claims. The examiner will analyze the limitation as "a second button". Claim 16 recites the limitation "the fourth button" in line 11. There is insufficient antecedent basis for this limitation in the claim. The examiner will analyze the limitation as "a fourth button".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 23 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claim does not fall within at least one of the four categories of patent eligible subject matter because the claimed invention lacks patentable utility. Claim 23 recites a computer-readable storage medium. Applicant's specification teaches that the computer-readable medium may include a computer storage medium and a communication medium, may further include any medium that can transfer the computer program from one place to another, and may include a RAM, a ROM, a compact disc read-only memory (CD-ROM) or another optical disk memory, a magnetic disk memory or another magnetic storage device, or any other medium that is used to carry required program code in a form of instructions or a data structure and that can be accessed by a computer (Para 328 and 329).

If the specification includes written description support, this rejection could be overcome by claiming the invention as being stored on a non-transitory computer-readable medium; however, see MPEP 2111.05 for a discussion of functional and nonfunctional descriptive material as related to computer-readable media.

Claim interpretation affects the evaluation of both criteria for eligibility. For example, in Mentor Graphics v. EVE-USA, Inc., 851 F.3d 1275, 112 USPQ2d 1120 (Fed. Cir. 2017), claim interpretation was crucial to the court's determination that claims to a "machine-readable medium" were not to a statutory category. In Mentor Graphics, the court interpreted the claims in light of the specification, which expressly defined the medium as encompassing "any data storage device," including random-access memory and carrier waves. Although random-access memory and magnetic tape are statutory media, carrier waves are not, because they are signals similar to the transitory, propagating signals held to be non-statutory in Nuijten. 851 F.3d at 1294, 112 USPQ2d at 1133 (citing In re Nuijten, 500 F.3d 1346, 84 USPQ2d 1495 (Fed. Cir. 2007)).
Accordingly, because the BRI of the claims covered both subject matter that falls within a statutory category (the random-access memory) and subject matter that does not (the carrier waves), the claims as a whole were not to a statutory category and thus failed the first criterion for eligibility. See MPEP 2106(II).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-8, 10-14, and 16-23 are rejected under 35 U.S.C. 103 as being unpatentable over Kang (US Patent Pub. No. 2016/0255268) in view of Li (US Patent Pub. No. 2022/0159183).

As to claim 1, Kang (Figs. 1-3) discloses a recording method, applied to a terminal device (mobile terminal 100) comprising a first camera (camera 121b), the method comprising: displaying (display unit 151) (step S210), by the terminal device (100), a first interface of a camera application (image capturing function), wherein the first interface comprises a first window (preview image 310) and a second window (pop-up window 500), the first window (310) displays a first picture (310) collected by the first camera (121b), the second window (500) displays a second picture (preview image 410) and a first button (Fig. 3(c), cross button on the top right corner of the pop-up window 500), and the second picture (410) is a part of the first picture (310) (Para 145, 149, 152, and 160); at a first moment, when the terminal device (100) detects that a first position (selected portion 400) of the first picture (310) comprises a first object (bird), comprising the first object (bird) in the second picture (410) (Para 152); at a second moment, when the terminal device (100) detects that a second position (magnify or reduce the preview image 410) of the first picture (310) comprises the first object (bird), comprising the first object (bird) in the second picture (410) (Para 155 and 156); and displaying, by the terminal device (100), a third window (select a different portion 400) on the first window (310), wherein the third window comprises a third picture (selected object), the third picture is a part of the first picture (310), and a position of the third window in the first window (310) is the same as a position of the second window (500) in the first window (310) (Para 155 and 156).
Kang implies, at a third moment, stopping (implied by touching the cross button on the top right corner of the pop-up window 500), by the terminal device (100) in response to a trigger operation (pressing the cross button on the top right corner of the pop-up window 500) of a user on the first button (Fig. 3(c), cross button on the top right corner of the pop-up window 500), displaying the second window (410), and continuing to display the first window (310), wherein the third moment is later than the second moment, and the second moment is later than the first moment (Para 160-173). Kang does not expressly teach touching the cross button on the top right corner of the pop-up window. Li teaches that when the user taps an "x" control 1701, the mobile phone may display only an image in an area 1702 and stop displaying an image in an area 1703 (Para 219). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have provided an "x" control 1701, as taught by Li, on the cross button on the top right corner of the pop-up window 500 of Kang, to improve the visual experience of the user (Para 221 of Li).

As to claim 2, Kang teaches wherein the first object (bird) is displayed in the center of the second picture (410) (Para 152).

As to claim 3, Kang teaches wherein the second window (500) is floated on an upper layer of the first window (310), and the second window (500) is smaller than the first window (310) (Para 152 and 153).
As to claim 5, Kang teaches wherein the displaying, by the terminal device (100), a third window (500) on the first window (310) comprises: displaying, by the terminal device (100), the third window (500) on the first window (310) when the terminal device (100) detects a trigger operation (select portion 400) of the user on a first tracking identifier (tracking function) of the first object (bird) in the first window (310); or displaying, by the terminal device (100), the third window (500) on the first window (310) when the terminal device (100) detects a trigger operation (select portion 400) of the user on a first tracking identifier (tracking function) of any object (select portion 400) in the first window (310) (Para 150-154 and 248).

As to claim 6, Kang teaches wherein the displaying, by the terminal device (100), a third window (500) on the first window (310) comprises: displaying, by the terminal device (100), the third window (500) on the first window (310) when the terminal device (100) detects that a duration that the second window (500, deleted) is not displayed is within a first time limit and/or a duration that the terminal device (100) does not use a recording function is within a second time limit (Kang teaches that the second window is deleted when the cross button is pressed; therefore, a period of time can pass during which the second window (500) is not displayed), wherein the position of the third window (500) in the first window (310) is the same as the position of the second window (500) in the first window (310) (Para 155 and 156).
As to claim 7, Kang teaches wherein when the terminal device (100) displays the third window (500) on the first window (310), a horizontal and vertical screen state (fixed position) of the third window (500) is the same as a horizontal and vertical screen state (fixed position) of the second window (500), and/or a size (magnify or reduce) of the third window (500) remains the same as a size of the second window (500) (Para 153 and 155).

As to claim 8, Kang (Fig. 5D) teaches wherein the method further comprises: detecting, by the terminal device (100), a scale or enlarge operation (magnified (scaled up)) on the second window (500); and scaling or enlarging the size (magnified (scaled up)) of the second window (500) by the terminal device (100) in response to the scale or enlarge operation (drag touch) on the second window (500); wherein when the terminal device (100) scales or enlarges the size (magnified (scaled up)) of the second window (500), content in the second picture remains unchanged, or content in the second picture (410) is in direct proportion to the size of the second window (500) (Para 191).

As to claim 10, Li teaches wherein the second window (image in the area 1703) further comprises a second button (recording button), and the terminal device (electronic device 100) further hides the first button (1701) and the second button (recording button) in the second window (1703) in response to the scale or enlarge operation (drag a boundary line) on the second window (1703) (Para 188, 219, and 221). Li teaches the mobile phone may further hide some controls in the preview interface in the multi-channel video recording mode, to avoid as much as possible that a preview image is blocked by the controls, thereby improving visual experience of the user (Para 221).
As to claim 11, Kang teaches wherein the method further comprises: detecting, by the terminal device (100), a move operation (drag touch 610) on the second window (500); and moving (drag), by the terminal device in response to the move operation (drag touch 610) on the second window (500), the second window (500) along a movement track (drag) of the move operation (610) (Para 217 and 218).

As to claim 12, Li teaches wherein the terminal device (100) further hides the first button (1701) and a second button (recording button) in the second window (1703) in response to the move operation (drag a boundary line) on the second window (1703) (Para 221). Li teaches the mobile phone may further hide some controls in the preview interface in the multi-channel video recording mode, to avoid as much as possible that a preview image is blocked by the controls, thereby improving visual experience of the user (Para 221).

As to claim 13, Kang (Fig. 5D) teaches wherein the second window (500) in the first interface further comprises a second button (touch and drag), and the method further comprises: switching, in response to a trigger operation (touch and drag) of the user on the second button (touch and drag), a display state of the second window (500) in the first interface, wherein a window aspect ratio (reduce or magnify a size) displayed in a display state before the switching of the second window (500) is different from a window aspect ratio (reduce or magnify a size) displayed in a display state after (drag completed) the switching (Para 181-184).

As to claim 14, Li (Fig. 5D) teaches wherein the first window (310) in the first interface further comprises a third button (setting control 701), and the method comprises: adding and displaying, in response to a trigger operation (tapping) of the user on the third button (701), a fourth button (700) in the first window in the first interface; and switching, in response to a trigger operation of the user on the fourth button (700), the second window in the first interface from a display state corresponding to the third button to a display state corresponding to the fourth button, wherein a window aspect ratio (display style) displayed in the display state corresponding to the third button (701) of the second window is different from a window aspect ratio displayed in the display state corresponding to the fourth button (700) (Para 181-183). Li teaches that after detecting an operation of tapping the control 701 by the user, the mobile phone may display a setting interface shown in FIG. 7(b). The user may set or change, in the setting interface, the N camera lenses used by the mobile phone during multi-channel video recording (Para 182).
As to claim 16, Kang teaches wherein before the displaying, by the terminal device (100), a first interface of a camera application, the method further comprises: displaying, by the terminal device (100), a second interface of the camera application, wherein the second interface comprises the first window (310) and the second window (500), the first window (310) displays a fourth picture (updated preview image) collected by the first camera (121) and a fifth button (a hardware key associated with the image capturing function is touched or pressed, or at least one of software keys or visual keys is touched), and the second window (500) displays a part of the fourth picture (image selected); and detecting, by the terminal device (100), a trigger operation of the user on the fifth button; and the displaying, by the terminal device, a first interface of a camera application comprises: displaying the first interface by the terminal device (100) in response to a trigger operation of the user on a fourth button (a hardware key associated with the image capturing function is touched or pressed, or at least one of software keys or visual keys is touched), wherein the first window further comprises first recording duration information of the first window (video or image capture mode), the second window (500) further comprises second recording duration information (video or image capture mode) of the second window (500), and the second recording duration information of the second window (500) is the same as the first recording duration information of the first window (310) (Para 70, 145, and 227-236).

As to claim 17, Kang (Fig. 3) teaches wherein before the displaying, by the terminal device (100), a first interface of a camera application, the method further comprises: displaying, by the terminal device (100), a third interface of the camera application, wherein the third interface comprises the first window (310) but does not comprise the second window (500), and the first window displays a fifth picture collected by the first camera and a fifth button (a hardware key associated with the image capturing function is touched or pressed, or at least one of software keys or visual keys is touched); detecting, by the terminal device (100), a trigger operation of the user on the fifth button; displaying, by the terminal device (100) in response to the trigger operation of the user on the fifth button, a fourth interface of the camera application, wherein the fourth interface comprises the first window (310), and the first window displays a sixth picture recorded by the first camera (121) and first recording duration information (video or image capture mode) of the first window (310); displaying, when the terminal device (100) detects that the first window (310) in the fourth interface comprises the first object (bird), the first tracking identifier (tracking function) associated with the first object (bird) in the first window (310); and detecting, by the terminal device (100), a trigger operation of the user on the first tracking identifier (tracking function); and the displaying, by the terminal device (100), a first interface of a camera application comprises: displaying the first interface by the terminal device (100) in response to the trigger operation of the user on the first tracking identifier (tracking function), wherein the second window (500) in the first interface further comprises second recording duration information (video or image capture mode) of the second window (500), and the second recording duration information of the second window (500) is different from the first recording duration information of the first window (310) (Para 70, 145, 154, and 227-236).

As to claim 18, Kang teaches further comprising: pausing recording (stop) in the second window (first pop-up window 500b) when the terminal device (100) detects that the first window (310) does not comprise the first object (bird), wherein the recording duration information of the first window (310) is continuously updated, and updating of the recording duration information of the second window (500b) is paused (stop) (Para 248). Kang teaches that the control unit 180 may perform a tracking function to move the displayed portion according to the movement. In a state in which the first pop-up window 500b is output according to selection of the first portion 400b, when the graphic object corresponding to the first portion 400b disappears, the control unit 180 may make the first pop-up window 500b disappear (Para 248).

As to claim 19, Li teaches wherein when the recording in the second window (620) is paused, the second window (620) further comprises a recording pause identifier and a sixth button (control 1802 for controlling video recording to pause), and after the pausing of recording in the second window (620), the method further comprises: detecting, by the terminal device (100), a trigger operation of the user on the sixth button (1802); and stopping, in response to the trigger operation of the user on the sixth button (1802), displaying the second window (620), and continuing to display the first window (610), wherein the recording duration information of the first window (610) is continuously updated (Para 225-231).
As to claim 20, Kang teaches wherein the first window (310) in the first interface further comprises a seventh button (a hardware key associated with the image capturing function is touched or pressed, or at least one of software keys or visual keys is touched), and the method further comprises: detecting, by the terminal device (100), a trigger operation (touched or pressed) of the user on the seventh button; and storing, by the terminal device (100) in response to the trigger operation of the user on the seventh button, a first video and a second video (video or image capture mode), wherein the first video is associated with the first window, and the second video is associated with the second window (Para 70, 145, and 227-236).

As to claim 21, Kang teaches wherein when the display state of the second window (410) during recording is switched, the stored second video also comprises pictures in different display states (Para 233-234).

As to claims 22 and 23, these claims differ from claim 1 only in that claim 1 is a method claim whereas claims 22 and 23 are terminal device and computer-readable storage medium claims, respectively. Thus, claims 22 and 23 are analyzed as previously discussed with respect to claim 1 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER K PETERSON, whose telephone number is (571) 270-1704. The examiner can normally be reached Monday-Friday, 7AM-4PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sinh N Tran, can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER K PETERSON/
Primary Examiner, Art Unit 2637
12/23/2025

Prosecution Timeline

Mar 18, 2024
Application Filed
Dec 24, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604081
Video Shooting Method and Electronic Device
2y 5m to grant • Granted Apr 14, 2026
Patent 12604082
ELECTRONIC DEVICE CAPABLE OF ADJUSTING ANGLE OF VIEW AND OPERATING METHOD THEREFOR
2y 5m to grant • Granted Apr 14, 2026
Patent 12604085
IMAGING METHOD AND IMAGING APPARATUS
2y 5m to grant • Granted Apr 14, 2026
Patent 12604095
SHAKE CORRECTION DEVICE AND IMAGING APPARATUS
2y 5m to grant • Granted Apr 14, 2026
Patent 12598395
ELECTRONIC DEVICE, CONTROL METHOD THEREOF AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
92%
With Interview (+13.9%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 813 resolved cases by this examiner. Grant probability derived from career allow rate.
