Prosecution Insights
Last updated: April 19, 2026
Application No. 18/605,699

IMAGE PROCESSING METHOD, DEVICE, AND STORAGE MEDIUM

Non-Final OA (§102, §103)
Filed
Mar 14, 2024
Examiner
SAFAIPOUR, BOBBAK
Art Unit
2665
Tech Center
2600 — Communications
Assignee
Shenzhen Transsion Holdings Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 86% (933 granted / 1085 resolved); +24.0% vs TC avg, above average
Interview Lift: +10.7% (moderate, ~+11%); based on resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 30 currently pending
Career History: 1115 total applications across all art units
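As a sanity check, the headline numbers above can be recomputed from the raw counts. A minimal sketch; the interview lift is treated here as the gap between the reported 97% with-interview rate and the base allow rate, which lands near the "+11%" label rather than exactly +10.7%:

```python
# Recompute the examiner metrics shown above from the raw counts.
granted = 933      # applications allowed
resolved = 1085    # applications disposed (allowed + abandoned)
pending = 30       # still in prosecution

allow_rate = granted / resolved          # career allow rate
total_apps = resolved + pending          # career application count

print(f"Career allow rate: {allow_rate:.0%}")   # -> 86%
print(f"Total applications: {total_apps}")      # -> 1115

# Interview lift: with-interview allowance rate minus the base rate.
with_interview = 0.97
print(f"Interview lift: {with_interview - allow_rate:+.1%}")
```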

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 43.6% (+3.6% vs TC avg)
§102: 26.6% (-13.4% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Tech Center averages are estimates; based on career data from 1085 resolved cases.
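The per-statute deltas are stated relative to a Tech Center baseline, so subtracting each delta from the examiner's rate recovers the implied TC average. Illustrative arithmetic only; the dictionary layout is assumed, not taken from any real data feed:

```python
# Implied Tech Center average per statute = examiner rate - reported delta.
rates = {
    "101": (8.7, -31.3),   # (examiner %, delta vs TC avg)
    "103": (43.6, +3.6),
    "102": (26.6, -13.4),
    "112": (6.6, -33.4),
}
for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")  # -> 40.0% in every case
```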

Office Action

DETAILED ACTION

Information Disclosure Statement

The information disclosure statement submitted on 03/14/2024 has been considered by the Examiner and made of record in the application file.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-3, 8, 11-12, 14-17 and 19-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Talvala (US 2014/0016004 A1).

Regarding claims 1 and 15, Talvala discloses an electronic device, comprising: a processor and a memory, wherein the memory stores computer execution instructions; and the computer execution instructions, when executed by the processor, cause the processor to: (paragraphs 17-18)

obtain first image data; (abstract, paragraphs 18, 27 and 29: Talvala teaches obtaining first image data by acquiring Initial Sensor Data from either the sensor or from a Raw Input, initiated by an acquire image instruction. Talvala also teaches acquiring a RAW image and reprocessing a stored RAW file from memory.)

determine or generate a target data stream according to a data stream format; and (paragraphs 21 and 30: Talvala teaches generating a target data stream by outputting image-related data and metadata into a structured bundle.
Metadata is output to an Output Frame Metadata Queue and, upon a getFrame() request, the RAW image and metadata are moved into a Frame 630 that consolidates the image with associated metadata fields. Additionally, Talvala teaches combining multiple image outputs such as YUV, compressed, video with raw output using a multiplexer and outputting the combined stream to the application.)

perform image processing on the first image data according to the target data stream. (paragraphs 20-21 and 30: Talvala teaches performing image processing by preprocessing the raw input using a preprocessor under a preprocessing instruction, for example noise reduction, demosaic, color correction, producing preprocessed data and converted output (YUV, compressed, video) and further processing for viewable formats (JPEG, YUV) for display or use by the application.)

Regarding claim 8, Talvala discloses an image processing method, comprising the following steps:

S10, obtaining first image data according to a preset rule; (paragraphs 25-29: Talvala teaches obtaining image data according to a rule via a Capture Request containing criteria for sensor control, lens control, 3A control, processing control, and statistics control, and by executing an ordered list of requests in an input request queue to drive acquisition and preprocessing.)

S20, determining or generating a target data stream according to the first image data; and (paragraphs 21 and 30: Talvala teaches that after acquiring and processing an image, the sensor outputs an image and metadata which are assembled into Frame 630 and/or combined through a multiplexer to provide a bundled output stream to the application.)

S30, performing image processing on the first image data based on the target data stream. (paragraphs 20-21, 30: Talvala teaches performing preprocessing and outputting processed formats and further processing into viewable formats, where the application may further process the multiplexer output.)
Regarding claim 2, Talvala discloses the claimed invention wherein the step S3 comprises: parsing the target data stream, and performing image processing on the first image data to obtain second image data and characteristic data of the second image data. (paragraphs 19 and 30: Talvala teaches generating and outputting characteristic data such as statistics (date, resolution, flash) and histogram outputs, and storing metadata and statistical output with the image in Frame 630.)

Regarding claim 3, Talvala discloses the claimed invention wherein the step S1 comprises at least one of: obtaining the first image data according to an imaging control instruction and/or an image obtaining instruction; (paragraphs 19 and 30: Talvala teaches that the API receives image capture instructions and uses methods such as stream(), capture(), and reprocess() to control acquisition. It also teaches obtaining image data from memory via Raw Input and reprocessing a stored RAW file.) obtaining characteristic data of the first image data. (paragraphs 19 and 30: Talvala teaches obtaining characteristic data by retrieving statistics (date, resolution, flash) and computing a histogram, and also outputting metadata associated with the image and consolidating it in Frame 630 (Basic Metadata, Final Settings, Statistical Output).)

Regarding claim 11, Talvala discloses the claimed invention wherein the step S10 comprises: in response to a call to a first interface, obtaining the preset rule from an entry parameter of the first interface, and obtaining the first image data according to the preset rule. (paragraphs 14, 22, 24-25: Talvala teaches that the camera API is initiated by commands such as getCameraInfo() and open(ID), and that open(ID) enables creation of pipelines and createCaptureRequest(), where the capture request includes criteria that control how image data is acquired.)
Regarding claim 12, Talvala discloses the claimed invention wherein the step S20 comprises: in response to a data request from an algorithm module to which a data stream flows, obtaining data required by the algorithm module; (paragraph 30: Talvala teaches that metadata is stored in an Output Frame Metadata Queue and, when requested by getFrame(), the RAW image and metadata are delivered into Frame 630.) and after obtaining the data required by the algorithm module, assigning the obtained data to the data stream to obtain the target data stream. (paragraph 30: Talvala teaches assigning the RAW image and its metadata into Frame 630, which consolidates the image with associated fields (Capture Request, Final Settings, Basic Metadata, Statistical Output, ByteBuffer).)

Regarding claim 14, Talvala discloses the claimed invention wherein the step S30 comprises: parsing the target data stream, (paragraph 30: Talvala teaches that, upon a getFrame() request, image and metadata are moved into Frame 630 with defined subfields, enabling access to the contents.) and performing image processing on the first image data to obtain second image data and/or characteristic data of the second image data. (paragraphs 19-21 and 30: Talvala teaches second image data via preprocessing and conversion outputs and characteristic data via statistics and metadata in Frame 630.)

Regarding claim 16, Talvala discloses the claimed invention wherein the computer execution instructions, when executed by the processor, cause the processor to: parse the target data stream, (paragraph 30: Talvala teaches that image and metadata are packaged in Frame 630 when requested by getFrame(), enabling structured access to the contents.) and perform image processing on the first image data to obtain second image data and characteristic data of the second image data. (paragraphs 19-21 and 30: Talvala teaches producing processed image outputs and producing characteristic data.)
Regarding claim 17, Talvala discloses the claimed invention wherein the computer execution instructions, when executed by the processor, cause the processor to perform at least one of: obtaining the first image data according to an imaging control instruction and/or an image obtaining instruction; (paragraphs 5, 18, 26 and 29: Talvala teaches capture instructions via API and acquisition control via stream(), capture(), reprocess(), including obtaining image data from memory.) obtaining characteristic data of the first image data. (paragraphs 19 and 30: Talvala teaches generating characteristic data such as statistics and outputting metadata consolidated in Frame 630.)

Regarding claim 19, Talvala discloses the claimed invention wherein a processor and a memory, wherein the memory stores computer execution instructions; and when the computer execution instructions are executed by the processor, the image processing method according to claim 8 is implemented. (paragraphs 17-18)

Regarding claim 20, Talvala discloses the claimed invention wherein a non-transitory computer-readable storage medium, wherein computer execution instructions are stored in the computer-readable storage medium, and when the computer execution instructions are executed by a processor, the image processing method according to claim 1 is implemented. (paragraphs 17-18)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 4-7, 9-10, 13 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Talvala (US 2014/0016004 A1) in view of Newman (US 9,681,111 B1).
Regarding claim 4, Talvala discloses the claimed invention except for wherein the characteristic data of the first image data comprises at least one of: basic image information, imaging information and semantic information of the first image data; after the obtaining the characteristic data of the first image data, the method comprises at least one of: assigning the basic image information of the first image data to the target data stream; assigning the imaging information of the first image data to the target data stream; assigning the semantic information of the first image data to the target data stream.

In related art, Newman discloses the characteristic data of the first image data comprises at least one of: basic image information, imaging information and semantic information of the first image data; (col. 5, line 53 to col. 6, line 25; also see table 3: Newman teaches metadata captured with video, including image acquisition parameters and other metadata like motion, orientation, position and highlight, activity-type metadata.) after the obtaining the characteristic data of the first image data, the method comprises at least one of: assigning the basic image information of the first image data to the target data stream; assigning the imaging information of the first image data to the target data stream; assigning the semantic information of the first image data to the target data stream. (col. 2, lines 1-24; Newman teaches embedding metadata into a combined multimedia stream by generating a sensor track and combining it with the video track, i.e., assigning the characteristic data into the multimedia stream.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Newman into the teachings of Talvala for effective capturing and storing of video content.
Regarding claim 5, Talvala, as modified by Newman, discloses the claimed invention wherein the obtaining the characteristic data of the first image data comprises at least one of: obtaining the basic image information of the first image data through an imaging module of a photography system; obtaining the imaging information of the first image data through at least one of the imaging module and an auxiliary imaging module of the photography system. (col. 1, lines 48-67)

Regarding claim 6, Talvala discloses the claimed invention except for wherein the step S2 comprises at least one of: determining or generating the target data stream by arranging at least two data items sequentially in a first specific order; determining or generating the target data stream by arranging respective pieces of characteristic information in a third specific order.

In related art, Newman discloses at least one of: determining or generating the target data stream by arranging at least two data items sequentially in a first specific order; (col. 1, lines 48-67: Newman teaches creating a combined multimedia stream with multiple items (tracks, records) in a defined structure (video track and text track), and within the metadata track, records are written in a defined sequence.) determining or generating the target data stream by arranging respective pieces of characteristic information in a third specific order. (col. 12, lines 30-42: Newman teaches a self-describing metadata record structure with fields arranged in a prescribed order and tag, type, size, repeat describing how to parse the payload.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Newman into the teachings of Talvala for effective capturing and storing of video content.
Regarding claim 7, Talvala, as modified by Newman, discloses the claimed invention, further comprising at least one of: each of the data items comprises at least one type of characteristic information; respective pieces of characteristic information in each of the data items are arranged in a second specific order. (Newman: table 3)

Regarding claim 9, Talvala discloses the claimed invention except for wherein the step S10 comprises at least one of: if the preset rule instructs to add basic image information to a data stream, obtaining basic image information of the first image data through an imaging module of a photography system; if the preset rule instructs to add imaging information to a data stream, obtaining imaging information of the first image data through the imaging module and/or an auxiliary imaging module of the photography system; if the preset rule instructs to add at least one type of semantic information to a data stream, obtaining at least one type of semantic information of the first image data.

In related art, Newman discloses the step S10 comprises at least one of: if the preset rule instructs to add basic image information to a data stream, obtaining basic image information of the first image data through an imaging module of a photography system; (Newman: table 3) if the preset rule instructs to add imaging information to a data stream, obtaining imaging information of the first image data through the imaging module and/or an auxiliary imaging module of the photography system; (col. 9, line 52 to col. 10, line 22) if the preset rule instructs to add at least one type of semantic information to a data stream, obtaining at least one type of semantic information of the first image data. (col. 6, lines 14-25)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Newman into the teachings of Talvala for effective capturing and storing of video content.
Regarding claim 10, Talvala, as modified by Newman, discloses the claimed invention wherein if the preset rule instructs to add at least one type of semantic information to a data stream, obtaining at least one type of semantic information of the first image data comprises at least one of: if the preset rule instructs to add basic semantic information to a data stream, obtaining basic semantic information of the first image data; if the preset rule instructs to add optional semantic information to a data stream, obtaining at least one type of optional semantic information of the first image data. (col. 17, lines 1-20)

Regarding claim 13, Talvala discloses the claimed invention except for wherein in response to the data request from the algorithm module to which the data stream flows, obtaining the data required by the algorithm module comprises: in response to a call to a second interface from any algorithm module, determining or obtaining the data required by the algorithm module according to an input parameter of the second interface, and transmitting the obtained data to the algorithm module.

In related art, Newman discloses in response to the data request from the algorithm module to which the data stream flows, obtaining the data required by the algorithm module comprises: in response to a call to a second interface from any algorithm module, determining or obtaining the data required by the algorithm module according to an input parameter of the second interface, and transmitting the obtained data to the algorithm module. (col. 23-col. 24, see "Listing 14")

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Newman into the teachings of Talvala for effective capturing and storing of video content.
Regarding claim 18, Talvala discloses the claimed invention except for wherein the computer execution instructions, when executed by the processor, cause the processor to perform at least one of: determining or generating the target data stream by arranging at least two data items sequentially in a first specific order; determining or generating the target data stream by arranging respective pieces of characteristic information in a third specific order.

In related art, Newman discloses the computer execution instructions, when executed by the processor, cause the processor to perform at least one of: determining or generating the target data stream by arranging at least two data items sequentially in a first specific order; (col. 8, lines 49-55) determining or generating the target data stream by arranging respective pieces of characteristic information in a third specific order. (Table 3)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Newman into the teachings of Talvala for effective capturing and storing of video content.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BOBBAK SAFAIPOUR whose telephone number is (571)270-1092. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BOBBAK SAFAIPOUR/
Primary Examiner, Art Unit 2665

Prosecution Timeline

Mar 14, 2024
Application Filed
Jan 19, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597155: TRACKING THREE-DIMENSIONAL GEOMETRIC SHAPES (2y 5m to grant; granted Apr 07, 2026)
Patent 12597113: FABRIC DEFECT DETECTION METHOD (2y 5m to grant; granted Apr 07, 2026)
Patent 12591987: System and Method for Simultaneously Registering Multiple Lung CT Scans for Quantitative Lung Analysis (2y 5m to grant; granted Mar 31, 2026)
Patent 12586140: Automated Property Inspections (2y 5m to grant; granted Mar 24, 2026)
Patent 12586240: IMAGE PROCESSING APPARATUS AND CONTROL METHOD FOR SAME (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 97% (+10.7%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 1085 resolved cases by this examiner. Grant probability derived from career allow rate.
