Prosecution Insights
Last updated: April 19, 2026
Application No. 18/411,529

TONE MAPPING METHOD AND APPARATUS FOR PANORAMIC IMAGE

Status: Non-Final OA (§102)
Filed: Jan 12, 2024
Examiner: OSINSKI, MICHAEL S
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 75% — above average (466 granted / 619 resolved; +13.3% vs TC avg)
Interview Lift: strong, +23.2% among resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 12 applications currently pending
Career History: 631 total applications across all art units
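As a sanity check on the headline numbers, here is a small Python sketch. It assumes the career allow rate is simply grants divided by resolved cases and that the with-interview figure is roughly the base rate plus the reported lift; how the tool actually computes the interview-adjusted figure is not shown on this page.

# Hypothetical back-of-the-envelope check of the examiner statistics above.
# Assumptions: allow rate = granted / resolved, and the "with interview"
# figure is approximately the career allow rate plus the reported lift.

granted, resolved = 466, 619
career_allow_rate = granted / resolved               # ~0.753, shown as 75%
interview_lift = 0.232                               # +23.2% reported lift
with_interview = career_allow_rate + interview_lift  # ~0.985, shown as 98%

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Estimated allow rate with interview: {with_interview:.1%}")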

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Comparisons are against a Tech Center average estimate. Based on career data from 619 resolved cases.
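The per-statute deltas follow from simple subtraction. The sketch below assumes each delta is the examiner's figure minus a Tech Center average estimate; back-solving from the displayed numbers, a single 40% baseline reproduces all four deltas, but that uniform baseline is an inference from this page, not a documented methodology.

# Hypothetical reconstruction of the "vs TC avg" deltas shown above.
# Assumption: delta = examiner's per-statute figure minus a Tech Center
# average estimate. A single 40% baseline reproduces all four deltas,
# but that baseline is inferred from this page, not documented.

examiner_rates = {"§101": 0.095, "§103": 0.425, "§102": 0.223, "§112": 0.177}
tc_average_estimate = 0.40  # inferred, e.g. 9.5% + 30.5%

for statute, rate in examiner_rates.items():
    delta = rate - tc_average_estimate
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")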

Office Action

Rejection basis: §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. Applicant’s election of Species I in the reply filed on 12/17/2025 is acknowledged. Because applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)). Claims 1-20 are currently pending within this application with claims 8-13 being withdrawn for being directed towards a non-elected Species.

Information Disclosure Statement

2. The information disclosure statement(s) (IDS) submitted on 11/1/2024 and 1/15/2025 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.

Foreign Priority

3. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 for claiming foreign priority to application CN 202110794014.5, filed on 7/14/2021.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 1-3, 5-7, 14-16, 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kunkel (US PGPub 2018/0374192) [hereafter Kunkel].

5. As to claim 1, Kunkel discloses a tone mapping method (operational processing operations as shown in Figure 8 executed by a computing device shown in Figure 9 that includes a processor and memory that causes the processor to execute the operational processing operations and embody an image processing system shown in Figures 6-7) for a panoramic image (spherical image 112 as shown in Figure 1), comprising: determining one or more target metadata information units (image metadata such as transfer function parameters including minimum/maximum/mid-tone luminance values) of a first pixel from a plurality of metadata information units (as shown in spherical metadata map shown in Figure 6) obtained by parsing a bitstream (panorama video 108 generated by video stitching operations 106 of multiple individual video stream 104 as shown in Figure 1), the first pixel is any pixel in a panoramic video two-dimensional planar projection (projections and corresponding viewport images), the plurality of metadata information units correspond to a plurality of segmented regions comprised in a panoramic video three-dimensional spherical representation panoramic image (spherical image 504 as shown in Figure 6) having a mapping relationship with the panoramic video two-dimensional planar projection; and performing tone mapping on a pixel value of the first pixel based on the one or more target metadata information units, to obtain a target tone mapping value of the first pixel (Paragraphs 0039-0043, 0045-0050, 0056-0059, 0067-0068, 0072-0078, 0080, 0084-0085, 0121-0126, 0134-0137, 0149-0158, 0162, 0166-0171, 0190-0195, 0204-0210, image metadata parameters are included within and extracted from a video stream of a 3D panoramic video which correspond to pixel values of the panoramic video which are projected onto a 2D plane to form a 2D spherical image and corresponding viewport image wherein the image metadata parameters are used to form a metadata spherical map that includes regions of metadata parameters which corresponding to the regions of the spherical image and viewport image and are used by an image rendering component in order to perform tone mapping the image data of the viewport images to optimize them for rendering on a corresponding display).

6. As to claim 2, Kunkel discloses before the determining the one or more target metadata information units of the first pixel from plurality of metadata information units, the method further comprises: segmenting the panoramic video three-dimensional spherical representation panoramic image in a preset segmentation manner (as shown in Figure 6), to obtain the plurality of segmented regions; or segmenting the panoramic video three-dimensional spherical representation panoramic image in a segmentation manner obtained by parsing the bitstream, to obtain the plurality of segmented regions; or obtaining the plurality of segmented regions based on indication information of the plurality of segmented regions obtained by parsing the bitstream (Paragraphs 0059, 0121-0122, 0137, the spherical image is divided into a plurality of preset view angle pixels separated in the latitude and longitude ranges as shown in Figures 1 and 6).

7. As to claim 3, Kunkel discloses the plurality of segmented regions are obtained by segmenting the panoramic video three-dimensional spherical representation panoramic image based on a preset angle of view separation rule; or the plurality of segmented regions are obtained by segmenting the panoramic video three-dimensional spherical representation panoramic image in a latitude direction; and/or the plurality of segmented regions are obtained by segmenting the panoramic video three-dimensional spherical representation panoramic image in a longitude direction (Paragraphs 0059, 0121-0122, 0137, the spherical image is divided into a plurality of preset view angle pixels separated in the latitude and longitude ranges as shown in Figures 1 and 6).
8. As to claim 5, Kunkel discloses the determining the one or more target metadata information units of first pixel from the plurality of metadata information units comprises: determining a correspondence between the plurality of metadata information units and the plurality of segmented regions (correspondence between projection 608 of metadata spherical map 602 and projection 604 of spherical video image), wherein one metadata information unit corresponds to one or more segmented regions; determining one or more target segmented regions based on a specified mapping point (center coordinate of viewport image); and when there is only one target segmented region, determining a metadata information unit corresponding to the one target segmented region as a target metadata information unit; or when there are a plurality of target segmented regions, determining metadata information units respectively corresponding to the plurality of target segmented regions as a plurality of target metadata information units (Paragraphs 0121-0126, 0132-0137, 0143, correspondences between regions within the projections of the spherical image and the spherical metadata map are determined along with a center coordinate of a viewport image in order to determine appropriate metadata transformation parameters to transform the pixel data of a viewport image to be displayed where one target segmented region of the spherical image corresponding to the viewport image produces metadata corresponding to the target segmented region of the spherical image).

9. As to claim 6, Kunkel discloses the determining the correspondence between the plurality of metadata information units and the plurality of segmented regions comprises: extracting a current metadata information unit from the plurality of metadata information units in a first preset sequence; extracting a current segmented region from the plurality of segmented regions in a second preset sequence; and establishing a correspondence between the current segmented region and the current metadata information unit; or wherein the determining the correspondence between the plurality of metadata information units and the plurality of segmented regions comprises: extracting a current metadata information unit from the plurality of metadata information units in the first preset sequence; extracting a current segmented region from the plurality of segmented regions in a traversing sequence obtained by parsing the bitstream; and establishing a correspondence between the current segmented region and the current metadata information unit; or wherein the determining the correspondence between the plurality of metadata information units and the plurality of segmented regions comprises: extracting a current metadata information unit from the plurality of metadata information units in the first preset sequence; obtaining one or more coordinates comprised in the current metadata information unit; determining one or more mapping points in the panoramic video three-dimensional spherical representation panoramic image based on the one or more coordinates; and when there is only one mapping point, establishing a correspondence between the current metadata information unit and a segmented region to which the one mapping point belongs; or when there are a plurality of mapping points, establishing a correspondence between the current metadata information unit and at least one segmented region to which the plurality of mapping points belong (Paragraphs 0121-0126, metadata vectors of transformation parameters are extracted for a corresponding viewport image while a viewport image data is extracted from the segments of the spherical image based on determined central coordinates corresponding to the field-of-view of a user and a correspondence between the metadata vectors and the viewport image of the segmented region of the spherical image enables the viewport image to be tone-mapped according to the parameters of the rendering display device).

10. As to claim 7, Kunkel discloses wherein the performing tone mapping on the pixel value of the first pixel based on the one or more target metadata information units, to obtain the target tone mapping value of the first pixel comprises: obtaining one or more tone mapping curves (transfer function parameters) based on the one or more target metadata information units; when there is only one tone mapping curve, performing tone mapping on the pixel value of the first pixel based on the one tone mapping curve, to obtain the target tone mapping value; or when there are a plurality of tone mapping curves, separately performing tone mapping on the pixel value of the first pixel based on the plurality of tone mapping curves, to obtain a plurality of tone median values of the first pixel; and obtaining the target tone mapping value based on the plurality of tone median values (Paragraphs 0074-0078, 0121-0122, 0126, 0133-0137, 0140, 0143, 0153-0158, per-view image transfer function parameters used by the display rendering device are extracted based on the metadata information contained within the metadata information map corresponding to the area of the spherical image mapped to the center coordinate of the viewport image where the tone mapping of the pixels of the viewport image is performed using the per-view transfer function parameters corresponding to a single tone mapping operation).
11. As to claim 14, Kunkel discloses a terminal device (computing device shown in Figure 9) comprising: a processor (904); and a memory (906-910) coupled to the processor to store instructions, which when executed by the processor, cause the processors to perform operations, the operations comprising: determining one or more target metadata information units (image metadata such as transfer function parameters including minimum/maximum/mid-tone luminance values) of a first pixel from a plurality of metadata information units (as shown in spherical metadata map shown in Figure 6) obtained by parsing a bitstream (panorama video 108 generated by video stitching operations 106 of multiple individual video stream 104 as shown in Figure 1), the first pixel is any pixel in a panoramic video two-dimensional planar projection (projections and corresponding viewport images), the plurality of metadata information units correspond to a plurality of segmented regions comprised in a panoramic video three-dimensional spherical representation panoramic image (spherical image 504 as shown in Figure 6) having a mapping relationship with the panoramic video two-dimensional planar projection; and performing tone mapping on a pixel value of the first pixel based on the one or more target metadata information units, to obtain a target tone mapping value of the first pixel (Paragraphs 0039-0043, 0045-0050, 0056-0059, 0067-0068, 0072-0078, 0080, 0084-0085, 0121-0126, 0134-0137, 0149-0158, 0162, 0166-0171, 0190-0195, 0204-0210, operational processing operations as shown in Figure 8 are executed by a computing device shown in Figure 9 that includes a processor and memory that causes the processor to execute the operational processing operations and embody an image processing system shown in Figures 6-7 wherein image metadata parameters are included within and extracted from a video stream of a 3D panoramic video which correspond to pixel values of the panoramic video which are projected onto a 2D plane to form a 2D spherical image and corresponding viewport image wherein the image metadata parameters are used to form a metadata spherical map that includes regions of metadata parameters which corresponding to the regions of the spherical image and viewport image and are used by an image rendering component in order to perform tone mapping the image data of the viewport images to optimize them for rendering on a corresponding display).

12. As to claim 15, Kunkel discloses segmenting the panoramic video three-dimensional spherical representation panoramic image in a preset segmentation manner (as shown in Figure 6), to obtain the plurality of segmented regions; or segmenting the panoramic video three-dimensional spherical representation panoramic image in a segmentation manner obtained by parsing the bitstream, to obtain the plurality of segmented regions; or obtaining the plurality of segmented regions based on indication information of the plurality of segmented regions obtained by parsing the bitstream (Paragraphs 0059, 0121-0122, 0137, the spherical image is divided into a plurality of preset view angle pixels separated in the latitude and longitude ranges as shown in Figures 1 and 6).

13. As to claim 16, Kunkel discloses the plurality of segmented regions are obtained by segmenting the panoramic video three-dimensional spherical representation panoramic image based on a preset angle of view separation rule; or the plurality of segmented regions are obtained by segmenting the panoramic video three-dimensional spherical representation panoramic image in a latitude direction; and/or the plurality of segmented regions are obtained by segmenting the panoramic video three-dimensional spherical representation panoramic image in a longitude direction (Paragraphs 0059, 0121-0122, 0137, the spherical image is divided into a plurality of preset view angle pixels separated in the latitude and longitude ranges as shown in Figures 1 and 6).

14. As to claim 18, Kunkel discloses determining a correspondence between the plurality of metadata information units and the plurality of segmented regions (correspondence between projection 608 of metadata spherical map 602 and projection 604 of spherical video image), wherein one metadata information unit corresponds to one or more segmented regions; determining one or more target segmented regions based on a specified mapping point (center coordinate of viewport image); and when there is only one target segmented region, determining a metadata information unit corresponding to the one target segmented region as a target metadata information unit; or when there are a plurality of target segmented regions, determining metadata information units respectively corresponding to the plurality of target segmented regions as a plurality of target metadata information units (Paragraphs 0121-0126, 0132-0137, 0143, correspondences between regions within the projections of the spherical image and the spherical metadata map are determined along with a center coordinate of a viewport image in order to determine appropriate metadata transformation parameters to transform the pixel data of a viewport image to be displayed where one target segmented region of the spherical image corresponding to the viewport image produces metadata corresponding to the target segmented region of the spherical image).

15. As to claim 19, Kunkel discloses extracting a current metadata information unit from the plurality of metadata information units in a first preset sequence (bottom sequence shown in Figure 6A); extracting a current segmented region from the plurality of segmented regions in a second preset sequence (top sequence shown in Figure 6A); and establishing a correspondence between the current segmented region and the current metadata information unit (Paragraphs 0121-0126, metadata vectors of transformation parameters are extracted for a corresponding viewport image while a viewport image data is extracted from the segments of the spherical image based on determined central coordinates corresponding to the field-of-view of a user and a correspondence between the metadata vectors and the viewport image of the segmented region of the spherical image enables the viewport image to be tone-mapped according to the parameters of the rendering display device).
16. As to claim 20, Kunkel discloses obtaining one or more tone mapping curves (transfer function parameters) based on the one or more target metadata information units; when there is only one tone mapping curve, performing tone mapping on the pixel value of the first pixel based on the one tone mapping curve, to obtain the target tone mapping value; or when there are a plurality of tone mapping curves, separately performing tone mapping on the pixel value of the first pixel based on the plurality of tone mapping curves, to obtain a plurality of tone median values of the first pixel; and obtaining the target tone mapping value based on the plurality of tone median values (Paragraphs 0074-0078, 0121-0122, 0126, 0133-0137, 0140, 0143, 0153-0158, per-view image transfer function parameters used by the display rendering device are extracted based on the metadata information contained within the metadata information map corresponding to the area of the spherical image mapped to the center coordinate of the viewport image where the tone mapping of the pixels of the viewport image is performed using the per-view transfer function parameters corresponding to a single tone mapping operation).

Claims

17. Claims 4 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

18. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL S OSINSKI whose telephone number is (571) 270-3949. The examiner can normally be reached on Monday - Friday, 10:00am - 6:00pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oneal Mistry can be reached on (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is (571)-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MO
/MICHAEL S OSINSKI/
Primary Examiner, Art Unit 2674
2/5/2026
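For readers who want to follow the claimed flow concretely, below is a minimal, hypothetical Python sketch of the region-based tone mapping that claim 1 and its dependents recite: metadata information units parsed from the bitstream correspond to segmented regions of the spherical representation, a pixel of the 2D planar projection is mapped to the sphere to find its target region(s) and metadata unit(s), and tone mapping is applied using the corresponding curve(s), with averaging when several apply. The equirectangular projection, the latitude/longitude grid, the gamma-style curve, and the averaging rule are illustrative assumptions only; they are not the applicant's disclosed method or Kunkel's implementation.

# Hypothetical illustration of the region-based tone mapping flow recited
# in claim 1 and its dependents. Assumptions (not from the application or
# from Kunkel): an equirectangular 2D projection, a latitude/longitude grid
# of segmented regions, one metadata information unit per region carrying a
# simple gamma-style curve, and plain averaging when several curves apply.

def pixel_to_sphere(x, y, width, height):
    """Map a pixel of the 2D planar projection to (latitude, longitude) on
    the 3D spherical representation (equirectangular assumption)."""
    lon = (x + 0.5) / width * 360.0 - 180.0
    lat = 90.0 - (y + 0.5) / height * 180.0
    return lat, lon

def region_index(lat, lon, lat_regions, lon_regions):
    """Segment the sphere into a preset lat/lon grid and return the index
    of the region containing the mapping point."""
    i = min(int((90.0 - lat) / 180.0 * lat_regions), lat_regions - 1)
    j = min(int((lon + 180.0) / 360.0 * lon_regions), lon_regions - 1)
    return i * lon_regions + j

def tone_map(value, metadata_unit):
    """Apply a toy tone mapping curve driven by one metadata information
    unit (max-luminance normalisation followed by a gamma)."""
    normalised = min(value / metadata_unit["max_luminance"], 1.0)
    return normalised ** metadata_unit["gamma"]

def tone_map_pixel(x, y, value, metadata_units, width, height,
                   lat_regions, lon_regions):
    """Determine the target metadata information unit(s) for the first
    pixel and return its target tone mapping value (averaged if several
    curves apply)."""
    lat, lon = pixel_to_sphere(x, y, width, height)
    targets = [metadata_units[region_index(lat, lon, lat_regions, lon_regions)]]
    tone_values = [tone_map(value, unit) for unit in targets]
    return sum(tone_values) / len(tone_values)

# Usage: a 2x4 grid of regions, one metadata unit per region (in the claims
# these are parsed from the bitstream; hard-coded here for illustration).
lat_regions, lon_regions = 2, 4
metadata_units = [{"max_luminance": 1000.0, "gamma": 0.45}
                  for _ in range(lat_regions * lon_regions)]
print(tone_map_pixel(x=100, y=50, value=420.0,
                     metadata_units=metadata_units,
                     width=1920, height=960,
                     lat_regions=lat_regions, lon_regions=lon_regions))

The sketch selects a single region per pixel; the claims also contemplate a mapping point that implicates several regions, in which case multiple metadata information units would feed the averaging step.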

Prosecution Timeline

Jan 12, 2024: Application Filed
Jan 29, 2024: Response after Non-Final Action
Feb 06, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596951: MULTISCALE CONTIGUOUS BLOCK PIXEL ENTANGLER FOR IMAGE RECOGNITION ON HYBRID QUANTUM-CLASSICAL COMPUTING SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586351: STORAGE MEDIUM, SPECIFYING METHOD, AND INFORMATION PROCESSING DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579657: IMAGING DEVICE AND METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573028: NEURAL NETWORK FOR IMAGE REGISTRATION AND IMAGE SEGMENTATION TRAINED USING A REGISTRATION SIMULATOR (granted Mar 10, 2026; 2y 5m to grant)
Patent 12554796: OPTIMIZING PARAMETER ESTIMATION FOR TRAINING NEURAL NETWORKS (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 98% (+23.2%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
