Prosecution Insights
Last updated: April 19, 2026
Application No. 18/671,830

GENERATING COMPOSITE IMAGES

Non-Final OA: §103
Filed: May 22, 2024
Examiner: LEE, JONATHAN S
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
OA Rounds: 1-2
To Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Grants 84% — above average
Career Allow Rate: 84% (493 granted / 585 resolved; +22.3% vs TC avg)
Interview Lift: +9.5% among resolved cases with interview (moderate, ~+10% lift)
Typical Timeline: 2y 4m avg prosecution; 19 currently pending
Career History: 604 total applications across all art units

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 28.1% (-11.9% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 585 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 6-11, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shimizu et al. (U.S. Pub. No. 2023/0247277), hereinafter "Shimizu", in view of Jiang et al. (Quantitative Measurement of Perceptual Attributes and Artifacts for Tone-Mapped HDR Display, 2022, IEEE Transactions on Instrumentation and Measurement, Vol. 71, Pages 1-11), hereinafter "Jiang".

Claim 1 is met by the combination of Shimizu and Jiang, wherein Shimizu discloses:

An apparatus for generating composite image data, the apparatus comprising (See the Abstract.): at least one memory (See [0034].); and at least one processor coupled to the at least one memory and configured to (See [0036]-[0037].):

generate a first composite image based on a first image, a second image, and a first motion threshold (See [0074]: "Specifically, the first image processor 122a of the video signal processor 122 outputs the electric signal outputted from the first imaging device 123b to the main controller 101. The second image processor 122b of the video signal processor 122 outputs the electric signal outputted from the second imaging device 124b to the main controller 101." Then see [0075]: "The main controller 101 detects the motion information (for example, motion vector) of the object on the basis of these inputted electric signals (image signals)." Next see [0076]: "Specifically, the main controller 101 compares the motion vector detected at the motion information detecting step S20 with the first motion threshold value and the second motion threshold value, thereby determining the motion of the object. Specifically, the main controller 101 reads out the first motion threshold value and the second motion threshold value stored in the storage 110 to the memory 104, and compares the motion vector with the first motion threshold value and the second motion threshold value." Finally see Fig. 6: after it is determined at S102 that the motion vector is in between the first motion threshold value and the second motion threshold value, four kinds of HDR synthetic images are generated at S125.
Any of the four HDR synthetic images meets the claimed "first composite image", which is generated based on a first signal (meeting the claimed "first image"), a second signal (meeting the claimed "second image"), and both of the motion threshold values (one of which meets the claimed "first motion threshold").);

generate a second composite image based on the first image, the second image, and a second motion threshold (See Fig. 6: after it is determined at S102 that the motion vector is in between the first motion threshold value and the second motion threshold value, four kinds of HDR synthetic images are generated at S125. Any of the remaining three HDR synthetic images meets the claimed "second composite image", which is generated based on a first signal (meeting the claimed "first image"), a second signal (meeting the claimed "second image"), and both of the motion threshold values (the other of which meets the claimed "second motion threshold").);

Shimizu does not disclose the following; however, Jiang discloses:

compare a region of the first composite image with a region of the second composite image (See page 3, left column: "The QmTm presents clear protocols to quantitatively measure these attributes mentioned above. The measurement criteria include absolute energy deviation (AED), color difference, entropy, gradient similarity, and statistics of edge map so as to measure brightness, color, contrast, detail, and halo artifacts. Quality regression module based on machine learning techniques or linear regression (marked with red box) is adopted to establish the connection between these criterion features and subjective scores [i.e., mean opinion score (MOS)]. The optimal image can be automatically selected from a group of candidates by the proposed QmTm." To select the optimal image, the examiner asserts that the QmTm compares regions of the candidate images.
The examiner understands that these candidate images are generated by multi-exposure fusion (see page 5, right column, section III.A.), which meet the claimed "composite image".);

and output image data based on the comparison (See page 3, left column: "TMOs render the dynamic effect HDR images in SDR display at the expense of some perceptual information.").

Shimizu and Jiang together disclose the limitations of claim 1. Jiang is directed to a similar field of art (display of HDR fusion images on SDR displays); therefore, Shimizu and Jiang are combinable. Shimizu states in [0107] that after HDR synthetic image generation, "[t]he user selects the image or the like displayed on the display 121 from the displayed thumbnail images." Modifying the system and method of Shimizu by substituting the automatic selection method of Jiang for the manual selection of a displayed composite image in Shimizu (to arrive at the claimed "compare a region of the first composite image with a region of the second composite image; and output image data based on the comparison") would yield the expected and predictable result of reducing reliance on user input. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Shimizu and Jiang in this way.

Claim 6 is met by the combination of Shimizu and Jiang, which discloses the apparatus of claim 1, wherein Shimizu further teaches: the first image is captured using a first exposure duration; the second image is captured using a second exposure duration; and the second exposure duration is longer than the first exposure duration (See [0081]: "At Step S114, an HDR synthetic image with the first resolution is generated on the basis of the plurality of image signals (A0, A1) whose exposure amounts are different from each other and that is generated at Step S113." Then see [0087].).
Claim 7 is met by the combination of Shimizu and Jiang, which discloses the apparatus of claim 1, wherein Shimizu further teaches: to generate the first composite image, the at least one processor is configured to: determine a motion value based on the first image and the second image (See [0086]: "Specifically, in a case where the main controller 101 determines that the magnitude of the motion vector is equal to or more than the first motion threshold value and is equal to or less than the second motion threshold value".); and combine, based on the motion value and the first motion threshold, pixels of the first image with pixels of the second image to determine pixels for the first composite image (See [0094]: "At Step S125, an HDR synthetic image with the first resolution and an HDR synthetic image with the second resolution are generated on the basis of the plurality of image signals (A0, B0, B1, A0d, B0u) generated at Step S124. Specifically, the first image processor 122a converts each of the electric signal (A0) with the first resolution and the electric signal (A0d) with the second resolution into digital image data with a gradation width of predetermined bits.").

Claim 8 is met by the combination of Shimizu and Jiang, which discloses the apparatus of claim 1, wherein Shimizu further teaches: the second motion threshold is greater than the first motion threshold (See [0086]: "Specifically, in a case where the main controller 101 determines that the magnitude of the motion vector is equal to or more than the first motion threshold value and is equal to or less than the second motion threshold value". The second motion threshold value appears to be greater than the first motion threshold value.).
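As a reading aid only: the limitations mapped in claims 1, 7, and 8 describe a pipeline in which two composites are generated under different motion thresholds and a region comparison drives the output. The sketch below is a hypothetical, minimal illustration of that kind of pipeline; the per-pixel blend, the mean-squared-difference score, and every name in it are assumptions made for illustration, not taken from the application, Shimizu, or Jiang.

```python
def composite(short_exp, long_exp, motion, threshold):
    # Per-pixel blend of two exposures; keep the short exposure wherever
    # estimated motion exceeds the threshold (to limit ghosting).
    return [
        [(s + l) / 2 if m <= threshold else s
         for s, l, m in zip(srow, lrow, mrow)]
        for srow, lrow, mrow in zip(short_exp, long_exp, motion)
    ]

def region_score(a, b):
    # Mean-squared difference between two same-sized regions
    # (lower = more similar).
    diffs = [(x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return sum(diffs) / len(diffs)

# Toy 4x4 "images": a short and a long exposure, plus a per-pixel motion
# map with motion only in the bottom half.
short_exp = [[0.2] * 4 for _ in range(4)]
long_exp = [[0.8] * 4 for _ in range(4)]
motion = [[0.0] * 4, [0.0] * 4, [5.0] * 4, [5.0] * 4]

first = composite(short_exp, long_exp, motion, threshold=1.0)    # strict threshold
second = composite(short_exp, long_exp, motion, threshold=10.0)  # permissive threshold

# Compare the moving region of the two composites; output logic could then
# select image data based on this score.
score = region_score(first[2:], second[2:])
```

Under the strict threshold the moving region falls back to the short exposure, while the permissive threshold blends both exposures everywhere, so the region comparison yields a nonzero score only where motion made the two composites diverge.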
Claim 9 is met by the combination of Shimizu and Jiang, which discloses the apparatus of claim 1; Shimizu further teaches: a first image signal processor (ISP) configured to generate the first composite image; and a second ISP configured to generate the second composite image (See the video signal processor in Fig. 12, which meets both the claimed "first ISP" and "second ISP".).

Claim 10 is met by the combination of Shimizu and Jiang, which discloses the apparatus of claim 1; Shimizu further teaches: an image signal processor (ISP) configured to generate the first composite image and the second composite image (See the video signal processor in Fig. 12.).

Claims 11 and 16-20 are met by the combination of Shimizu and Jiang for the reasons given in the treatment of claims 1 and 6-10, respectively.

Allowable Subject Matter

Claims 2-5 and 12-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, individually or in combination, does not disclose or suggest in dependent claims 2 and 12: "compare a first region of the first composite image to a first region of the second composite image to determine a first similarity score; compare a second region of the first composite image to a second region of the second composite image to determine a second similarity score; and generate a third composite image based on the first composite image, the second composite image, the first similarity score and the second similarity score."

The closest prior art of record is noted as follows: Xia et al. (Robust patchmatch HDR image reconstruction for deghosting, 2022, Pattern Recognition Letters, Vol. 154, Pages 68-74) discloses in Fig. 1 an algorithm in which patches from input exposure images are fused to generate intermediate images. Multi-exposure fusion (MEF) is then performed on the intermediate images to output fused images. However, there does not appear to be a reasonable combination of this two-stage fusion method with Shimizu that meets the limitations of claim 2.

Dependent claims 3-5 and 13-15 include the limitations of dependent claim 2 and are also indicated as having the same allowable subject matter.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN S LEE, whose telephone number is (571) 272-1981. The examiner can normally be reached 11:30 AM - 7:30 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jonathan S Lee/
Primary Examiner, Art Unit 2677
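For orientation only: the allowable claim 2/12 limitations quoted above recite per-region similarity scores between the first and second composites that drive generation of a third composite. The sketch below is a hypothetical illustration of that idea; the similarity metric, the score-weighted blend, and all names are assumptions for illustration, not the application's disclosed method.

```python
def similarity(a, b):
    # Similarity score in (0, 1]: 1.0 for identical regions,
    # lower as the regions diverge.
    diffs = [abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return 1.0 / (1.0 + sum(diffs) / len(diffs))

def blend(a, b, score):
    # Where the two composites agree (score near 1), average them;
    # where they disagree, fall back toward the first composite.
    return [[score * (x + y) / 2 + (1 - score) * x for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# Toy composites that agree in the top half and disagree in the bottom half.
first = [[0.4, 0.4], [0.4, 0.4], [0.2, 0.2], [0.2, 0.2]]
second = [[0.4, 0.4], [0.4, 0.4], [0.6, 0.6], [0.6, 0.6]]

s1 = similarity(first[:2], second[:2])   # first regions: identical -> 1.0
s2 = similarity(first[2:], second[2:])   # second regions: differ -> < 1.0

# Third composite: each region blended under its own similarity score.
third = blend(first[:2], second[:2], s1) + blend(first[2:], second[2:], s2)
```

The two scores let each region of the third composite be formed differently, which is the structural feature the examiner found absent from the two-stage fusion of Xia.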

Prosecution Timeline

May 22, 2024: Application Filed
Mar 07, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602807: METHOD FOR SUBPIXEL DISPARITY CALCULATION (2y 5m to grant; granted Apr 14, 2026)
Patent 12602785: TRAINING A MACHINE LEARNING MODEL TO ASSESS EMBRYO CHARACTERISTICS FROM VIDEO IMAGE DATA (2y 5m to grant; granted Apr 14, 2026)
Patent 12597108: METHOD AND APPARATUS TO PERFORM A WIRELINE CABLE INSPECTION (2y 5m to grant; granted Apr 07, 2026)
Patent 12597110: IMAGE RECOGNITION METHOD, APPARATUS AND DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12584727: DIMENSION MEASUREMENT METHOD AND DIMENSION MEASUREMENT DEVICE (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84% (94% with interview, +9.5%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
