Prosecution Insights
Last updated: April 19, 2026
Application No. 18/614,714

IMAGE PROCESSING METHOD, INTELLIGENT TERMINAL, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Mar 24, 2024
Examiner: PATEL, SHIVANG I
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Shenzhen Transsion Holdings Co. Ltd.
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% — above average (309 granted / 415 resolved; +12.5% vs TC avg)
Interview Lift: +18.5% on resolved cases with interview (strong)
Typical Timeline: 2y 4m avg prosecution; 22 applications currently pending
Career History: 437 total applications across all art units

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§103: 57.8% (+17.8% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 415 resolved cases
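One detail worth noting: adding each displayed delta back to the examiner's rate recovers the same 40.0% baseline in all four rows, so the Tech Center "average" here appears to be a single flat estimate rather than a per-statute figure. A quick check of that arithmetic (all figures taken from the table above; the flat baseline is an inference, not a published USPTO number):

```python
# Recover the implied Tech Center baseline from each statute's
# examiner rate and its displayed delta: TC avg = rate - delta.
rows = {            # statute: (examiner rate %, delta vs TC avg, in points)
    "101": (10.3, -29.7),
    "103": (57.8, +17.8),
    "102": (16.7, -23.3),
    "112": (13.5, -26.5),
}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
# Every row recovers 40.0%, i.e. the chart's baseline sits at a single
# flat 40% estimate across all four statutes.
```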

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see pages 9-13, filed 1/9/2026, with respect to the 35 USC §101 rejection of claims 1-20 have been fully considered and are persuasive. Applicant has amended the claims to include an intelligent terminal and an image processor. The 35 USC §101 rejection of claims 1-20 has been withdrawn.

Applicant's arguments, see pages 13-17, filed 1/9/2026, with respect to the 35 USC §102 rejection of claims 1-20 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection and were directed towards the claims as amended.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-11, 13-16, and 18-20 are rejected under 35 U.S.C. 103 as being obvious over Newman (US 20180166102 A1) and Oh et al. (US 20180077453 A1).

Regarding claim 1, Newman discloses an image processing method, the image processing method implemented by an intelligent terminal ([0004] apparatus and methods for embedding metadata into one or more commonly used video storage format), comprising: S1: acquiring an image data stream transmitted by an optical imaging hardware of the intelligent terminal ([0043] acquiring video, e.g., using an action camera device); S2: processing the image data stream according to an image processing instruction by at least one image processor of the intelligent terminal ([0054] capture device 130 may include a multimedia processing component 114 in FIG. 1A and/or 220 in FIG. 2A, configured to produce a multimedia stream (denoted by pathway 124 in FIG. 1B) consisting of a video track and/or audio track), determining or generating image processing information, and obtaining a target image ([0054] Information from one or more metadata sources (e.g., 102, 104, 112, 114 in FIG. 1B) may be combined with the video and/or audio tracks by multiplexor component),
wherein the image data stream comprises image data and basic image information ([0052] information may comprise full resolution (e.g., 3840 pixels by 2160 pixels at 60 fps) video stream, lower-resolution (e.g., 1280×720 pixels) and/or lower frame rate (e.g., 30 fps) video stream, video duration (e.g., elapsed recording time), metadata (e.g., heart rate provided by the device 154), and/or other information), and the S2 further comprises:

Oh discloses processing the image data according to the image processing instruction by at least one image processor of the intelligent terminal to obtain target image data, and determining or generating the image processing information ([0040] HDR video production device may convert a natural scene into digital video. For example, the capture/film scanner may be a device that converts optical images obtained by a video camera, a camera, a scanner and the like into digital images); determining whether the processing causes a change in the basic image information ([0041] post-production block (mastering unit) 102 may receive the raw HDR video and output mastered HDR video and HDR metadata); if the change exists in the basic image information, determining or generating target basic image information, updating the image data stream based on the target basic image information, the target image data and the image processing information, to obtain a target image data stream, and obtaining the target image according to the target image data stream ([0045] The metadata processor may check whether the stored HDR metadata has been changed by checking a set number or a version number included in the HDR metadata and update existing HDR metadata when the stored HDR metadata has been changed. The metadata processor may output the HDR metadata to the post processor according to timing information received from the synchronizer); and/or if no change exists in the basic image information, updating the image data stream based on the target image data and the image processing information, to obtain the target image data stream, and obtaining the target image according to the target image data stream ([0044] The decoder 105 may receive and decode the HDR stream. In this process, the decoder may output decoded HDR video and HDR metadata. The decoded HDR video may be output to the post processor and the HDR metadata may be output to the metadata processor).

Newman and Oh are combinable because they are from the same field of invention.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus and methods for embedding metadata of Newman to include processing the image data according to the image processing instruction by at least one image processor of the intelligent terminal to obtain target image data, and determining or generating the image processing information; determining whether the processing causes a change in the basic image information; if the change exists in the basic image information, determining or generating target basic image information, updating the image data stream based on the target basic image information, the target image data and the image processing information, to obtain a target image data stream, and obtaining the target image according to the target image data stream; and/or if no change exists in the basic image information, updating the image data stream based on the target image data and the image processing information, to obtain the target image data stream, and obtaining the target image according to the target image data stream, as described by Oh. The motivation for doing so would have been to generate video data, generate a broadcast signal including the generated video data and video quality enhancement metadata, and transmit the generated broadcast signal (Oh, [0005]). Therefore, it would have been obvious to combine Newman and Oh to obtain the invention as specified in claim 1.

Regarding claim 4, Newman discloses wherein the image processing information comprises at least one of the following features: the image processing information, in the target image data stream, is located between the target basic image information and the target image data; the image processing information, in the target image data stream, is located between the basic image information and the target image data ([0115] The number of samples (items) in the metadata track does not have to match the number of frames in the video track. The metadata items may be evenly distributed over the metadata payload time window).

Regarding claim 5, Newman discloses wherein the image data stream further comprises an identifier of imaging information, wherein the updating the image data stream based on the target basic image information, the target image data and the image processing information ([0119] TIMG identifier may be followed by 8-bit MetadataItemType (‘f’), followed by MetadataItemSize field (4 bytes, one 4-byte float value)) comprises: determining whether the processing causes the change in the imaging information ([0110] Video frame related camera internal metadata may be characterized by a regular payload, with a predictable number of entries); if it is determined that the change exists in the imaging information, determining or generating target imaging information, and updating the image data stream based on the target basic image information, the target imaging information, the target image data and the image processing information ([0113] slowly varying (e.g., relative video information frame rate) metadata (e.g., heart rate, average position, ambient pressure, ambient temperature, and/or other information), the metadata track (e.g., track 340 in FIG. 3) may be configured to store metadata at a time scale corresponding to multiple frames); and/or if it is determined that no change exists in the imaging information, updating the image data stream based on the target basic image information, the target image data and the image processing information ([0115] The number of samples (items) in the metadata track does not have to match the number of frames in the video track. The metadata items may be evenly distributed over the metadata payload time window, in some implementations).

Regarding claim 6, Newman discloses wherein the target imaging information and the image processing information, in the target image data stream, are located between the target basic image information and the target image data, and the target imaging information is located before the image processing information ([0123] Metadata device (source) may be declared as “sticky”); both the imaging information and the image processing information, in the target image data stream, are located between the target basic image information and the target image data, and the imaging information is located before the image processing information ([0115] The number of samples (items) in the metadata track does not have to match the number of frames in the video track. The metadata items may be evenly distributed over the metadata payload time window).

Regarding claim 7, Newman discloses if the image data stream comprises original image processing information, adding the image processing information to the original image processing information in the image data stream ([0062] the sensor controller 220 or the microcontroller 202 performs operations on the received metadata to generate additional metadata information).

Regarding claim 8, Newman discloses wherein the original image processing information comprises a processing-reversible identifier ([0077] The MetadataTag field (402, 412, 422) may comprise a 32-bit four character code (fourCC) configured to identify metadata sensor, and/or type of metadata sensor), and the method further comprises: if the processing-reversible identifier indicates reversible processing, eliminating the image data from the target image data stream ([0077] The use of fourCC tag configuration provides for readability of the file by a human as character codes may be easily discerned when, e.g., viewing the multimedia stream using a hex editor tool).

Regarding claim 9, Newman discloses wherein the basic image information comprises at least one of the following: a length of the basic image information, a type identifier of the image data, a length of the image data, a width of the image data, a color space of the image data, a bit width of the image data, or a storage mode of the image data ([0078] Table 2 illustrates exemplary metadata tag codes for a plurality of telemetry metadata sources in accordance with one or more implementations).

Regarding claim 10, Newman discloses wherein the image processing information comprises at least one of the following: a length of the image processing information, a processing identifier, a processing-reversible identifier, processing description type information, a prior-to-processing data preservation identifier, or image data ([0079] Table 3 illustrates exemplary metadata tag codes for a plurality of image acquisition parameters employed by camera sensor and/or image processor, e.g., component 220 in FIG. 2A, in accordance with one or more implementations).
Regarding claim 11, Newman discloses an image processing method, the image processing method implemented by an intelligent terminal ([0004] apparatus and methods for embedding metadata into one or more commonly used video storage format), comprising: S10: acquiring an image data stream transmitted by an optical imaging hardware of the intelligent terminal, wherein the image data stream comprises image data and basic image information ([0043] acquiring video, e.g., using an action camera device); S20: processing the image data according to an image processing instruction by at least one image processor of the intelligent terminal ([0054] capture device 130 may include a multimedia processing component 114 in FIG. 1A and/or 220 in FIG. 2A, configured to produce a multimedia stream (denoted by pathway 124 in FIG. 1B) consisting of a video track and/or audio track), and determining or generating target basic image information, to update the image data stream, and obtaining a target image ([0054] Information from one or more metadata sources (e.g., 102, 104, 112, 114 in FIG. 1B) may be combined with the video and/or audio tracks by multiplexor component), wherein the S20 further comprises:

Oh discloses processing the image data according to the image processing instruction by the at least one image processor of the intelligent terminal to obtain target image data, and determining or generating image processing information ([0041] post-production block (mastering unit) 102 may receive the raw HDR video and output mastered HDR video and HDR metadata); updating the image data stream based on the target basic image information, the target image data and the image processing information, to obtain a target image data stream, and obtaining the target image according to the target image data stream ([0045] The metadata processor may check whether the stored HDR metadata has been changed by checking a set number or a version number included in the HDR metadata and update existing HDR metadata when the stored HDR metadata has been changed. The metadata processor may output the HDR metadata to the post processor according to timing information received from the synchronizer).

Newman and Oh are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus and methods for embedding metadata of Newman to include processing the image data according to the image processing instruction by at least one image processor of the intelligent terminal to obtain target image data, and determining or generating the image processing information; determining whether the processing causes a change in the basic image information; if the change exists in the basic image information, determining or generating target basic image information, updating the image data stream based on the target basic image information, the target image data and the image processing information, to obtain a target image data stream, and obtaining the target image according to the target image data stream; and/or if no change exists in the basic image information, updating the image data stream based on the target image data and the image processing information, to obtain the target image data stream, and obtaining the target image according to the target image data stream, as described by Oh.
The motivation for doing so would have been to generate video data, generate a broadcast signal including the generated video data and video quality enhancement metadata, and transmit the generated broadcast signal (Oh, [0005]). Therefore, it would have been obvious to combine Newman and Oh to obtain the invention as specified in claim 11.

Regarding claim 13, Newman discloses wherein the image data stream further comprises an identifier of imaging information, wherein the updating the image data stream based on the target basic image information, the target image data and the image processing information ([0119] TIMG identifier may be followed by 8-bit MetadataItemType (‘f’), followed by MetadataItemSize field (4 bytes, one 4-byte float value)) comprises: Oh discloses determining whether the processing causes the change in the imaging information; if a change exists in the imaging information, determining or generating target imaging information, and updating the image data stream based on the target basic image information, the target imaging information, the target image data and the image processing information ([0045] The metadata processor may check whether the stored HDR metadata has been changed by checking a set number or a version number included in the HDR metadata and update existing HDR metadata when the stored HDR metadata has been changed. The metadata processor may output the HDR metadata to the post processor according to timing information received from the synchronizer); and/or if no change exists in the imaging information, updating the image data stream based on the target basic image information, the target image data and the image processing information ([0044] The decoder 105 may receive and decode the HDR stream. In this process, the decoder may output decoded HDR video and HDR metadata. The decoded HDR video may be output to the post processor and the HDR metadata may be output to the metadata processor).

Newman and Oh are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus and methods for embedding metadata of Newman to include determining whether the processing causes the change in the imaging information; if a change exists in the imaging information, determining or generating target imaging information, and updating the image data stream based on the target basic image information, the target imaging information, the target image data and the image processing information; and/or if no change exists in the imaging information, updating the image data stream based on the target basic image information, the target image data and the image processing information, as described by Oh. The motivation for doing so would have been to generate video data, generate a broadcast signal including the generated video data and video quality enhancement metadata, and transmit the generated broadcast signal (Oh, [0005]). Therefore, it would have been obvious to combine Newman and Oh to obtain the invention as specified in claim 13.

Regarding claim 14, Newman discloses wherein the target imaging information and the image processing information, in the target image data stream, are located between the target basic image information and the target image data, and the target imaging information is located before the image processing information ([0123] Metadata device (source) may be declared as “sticky”); both the imaging information and the image processing information, in the target image data stream, are located between the target basic image information and the target image data, and the imaging information is located before the image processing information ([0115] The number of samples (items) in the metadata track does not have to match the number of frames in the video track. The metadata items may be evenly distributed over the metadata payload time window).

Regarding claim 15, Newman discloses wherein the imaging information comprises at least one of the following: a length of the imaging information, a shutter time of an imaging device, photo-sensibility of the imaging device, an aperture of the imaging device, a focal length of the imaging device, gyroscope information of the imaging device, an acceleration of the imaging device, geographic location information of the imaging device, or image rotation angle information of the imaging device ([0078] Table 2 illustrates exemplary metadata tag codes for a plurality of telemetry metadata sources in accordance with one or more implementations).

Regarding claim 16, Newman discloses an intelligent terminal ([0004] apparatus and methods for embedding metadata into one or more commonly used video storage format), comprising: a memory and a processor ([0060] one or more microcontrollers 202 (such as microprocessors) that control the operation and functionality of the capture device), wherein an image processing program is stored in the memory, and the image processing program, when executed by the processor ([0060] A system memory 204 is configured to store executable computer instructions that, when executed by the microcontroller 202, perform various camera functionalities), causes the processor to: acquire an image data stream transmitted by an optical imaging hardware of the intelligent terminal ([0043] acquiring video, e.g., using an action camera device); process the image data stream according to an image processing instruction ([0054] capture device 130 may include a multimedia processing component 114 in FIG. 1A and/or 220 in FIG. 2A, configured to produce a multimedia stream (denoted by pathway 124 in FIG. 1B) consisting of a video track and/or audio track), determine or generate image processing information, and obtain a target image, wherein the image data stream comprises image data and basic image information ([0054] Information from one or more metadata sources (e.g., 102, 104, 112, 114 in FIG. 1B) may be combined with the video and/or audio tracks by multiplexor component); the processor is further caused to:

Oh discloses process the image data according to the image processing instruction to obtain target image data, and determine or generate the image processing information ([0041] post-production block (mastering unit) 102 may receive the raw HDR video and output mastered HDR video and HDR metadata); determine whether the process causes a change in the basic image information ([0045] The metadata processor may check whether the stored HDR metadata has been changed); if the change exists in the basic image information, determine or generate target basic image information, update the image data stream based on the target basic image information, the target image data and the image processing information, to obtain a target image data stream, and obtain the target image according to the target image data stream; and/or if no change exists in the basic image information, update the image data stream based on the target image data and the image processing information, to obtain the target image data stream, and obtain the target image according to the target image data stream ([0045] The metadata processor may check whether the stored HDR metadata has been changed by checking a set number or a version number included in the HDR metadata and update existing HDR metadata when the stored HDR metadata has been changed. The metadata processor may output the HDR metadata to the post processor according to timing information received from the synchronizer).

Newman and Oh are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus and methods for embedding metadata of Newman to include process the image data according to the image processing instruction to obtain target image data, and determine or generate the image processing information; determine whether the process causes a change in the basic image information; if the change exists in the basic image information, determine or generate target basic image information, update the image data stream based on the target basic image information, the target image data and the image processing information, to obtain a target image data stream, and obtain the target image according to the target image data stream; and/or if no change exists in the basic image information, update the image data stream based on the target image data and the image processing information, to obtain the target image data stream, and obtain the target image according to the target image data stream, as described by Oh. The motivation for doing so would have been to generate video data, generate a broadcast signal including the generated video data and video quality enhancement metadata, and transmit the generated broadcast signal (Oh, [0005]). Therefore, it would have been obvious to combine Newman and Oh to obtain the invention as specified in claim 16.
Regarding claim 18, Newman discloses a memory and a processor, wherein an image processing program is stored in the memory, and the image processing program, when executed by the processor, implements steps of the image processing method as described in claim 11 ([0060] A system memory 204 is configured to store executable computer instructions that, when executed by the microcontroller 202, perform various camera functionalities).

Regarding claim 19, Newman discloses a non-transitory readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements steps of the image processing method as described in claim 1 ([0060] A system memory 204 is configured to store executable computer instructions that, when executed by the microcontroller 202, perform various camera functionalities).

Regarding claim 20, Newman discloses a non-transitory readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements steps of the image processing method as described in claim 11 ([0060] A system memory 204 is configured to store executable computer instructions that, when executed by the microcontroller 202, perform various camera functionalities).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANG I PATEL, whose telephone number is (571) 272-8964. The examiner can normally be reached M-F, 9 am-5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIVANG I PATEL/
Primary Examiner, Art Unit 2615
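For readers mapping the cited art onto the claim language, the branch structure of claim 1's S2 step may be easier to see as code. The sketch below is a hypothetical paraphrase of the claim as quoted in this action, nothing more: every name (`ImageDataStream`, `apply_instruction`, `derive_basic_info`, the toy `halve` instruction) is invented for this illustration and appears in neither the application nor Newman/Oh.

```python
from dataclasses import dataclass, replace
from typing import Callable

def derive_basic_info(data: bytes) -> dict:
    # Stand-in for real metadata extraction (width, color space, bit
    # width, etc.); here we track only the payload length.
    return {"length": len(data)}

@dataclass
class ImageDataStream:
    basic_info: dict          # "basic image information"
    processing_info: list     # "image processing information"; per claims
                              # 4/6 it sits between basic_info and image_data
    image_data: bytes

def apply_instruction(stream: ImageDataStream,
                      instruction: Callable) -> ImageDataStream:
    # S2, paraphrased: process the image data, record processing info,
    # then branch on whether the basic image information changed.
    target_data, proc_info = instruction(stream.image_data)
    target_basic = derive_basic_info(target_data)
    if target_basic != stream.basic_info:
        # A change exists: regenerate basic info and rebuild the stream.
        return ImageDataStream(target_basic,
                               stream.processing_info + [proc_info],
                               target_data)
    # No change: update only the image data and processing information.
    return replace(stream,
                   processing_info=stream.processing_info + [proc_info],
                   image_data=target_data)

# Toy "instruction": crop the payload in half (changes basic info).
halve = lambda d: (d[: len(d) // 2], {"op": "halve", "reversible": False})
s = ImageDataStream(derive_basic_info(b"12345678"), [], b"12345678")
print(apply_instruction(s, halve))
```

The examiner's §103 theory, in these terms, is that Newman supplies the stream container and metadata multiplexing while Oh supplies the change-detection-and-update branch.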

Prosecution Timeline

Mar 24, 2024: Application Filed
Oct 08, 2025: Non-Final Rejection — §103
Jan 09, 2026: Response Filed
Mar 24, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602847
SYSTEMS AND METHODS FOR LAYERED IMAGE GENERATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12599838
APPARATUS AND METHODS FOR RECORDING AND REPORTING ABUSIVE ONLINE INTERACTIONS
2y 5m to grant • Granted Apr 14, 2026
Patent 12592004
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
2y 5m to grant • Granted Mar 31, 2026
Patent 12591947
DISTORTION-BASED IMAGE RENDERING
2y 5m to grant • Granted Mar 31, 2026
Patent 12584296
Work Machine Display Control System, Work Machine Display System, Work Machine, Work Machine Display Control Method, And Work Machine Display Control Program
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 93% (+18.5% lift)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
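These projections reduce to simple arithmetic on the career counts shown earlier. A quick check, with the caveat that the additive +18.5% interview lift is this page's model rather than an official USPTO statistic:

```python
granted, resolved = 309, 415          # examiner career counts from above
base = granted / resolved             # 0.7446 -> displayed as 74%
interview_lift = 0.185                # +18.5 percentage points, per the model
with_interview = base + interview_lift

print(f"Grant probability: {base:.0%}")            # 74%
print(f"With interview:    {with_interview:.0%}")  # 93%
```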
