Prosecution Insights
Last updated: April 19, 2026
Application No. 18/030,429

Image Processing Method, Computing System, Device and Readable Storage Medium

Status: Non-Final OA (§103)
Filed: Apr 05, 2023
Examiner: ESQUINO, CALEB LOGAN
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
OA Round: 3 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (above average; 11 granted / 16 resolved; +6.8% vs TC avg)
Interview Lift: +41.7% (strong) on resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 27 applications currently pending
Career History: 43 total applications across all art units
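The headline probabilities above can be checked with simple arithmetic. A minimal sketch, assuming the "+41.7% interview lift" is a relative (multiplicative) uplift applied to the career allow rate and capped at 100% (the variable names and the capping rule are our assumptions, not the tool's documented method):

```python
# Sketch: one plausible derivation of the interview-adjusted grant probability.
# Assumption: the +41.7% "interview lift" is a relative uplift on the
# examiner's career allow rate, capped at 100%.

career_allow_rate = 11 / 16   # 11 granted of 16 resolved -> 68.75%, shown as 69%
interview_lift = 0.417        # +41.7% relative lift from interviews

p_with_interview = min(career_allow_rate * (1 + interview_lift), 1.0)
print(f"base: {career_allow_rate:.0%}, with interview: {p_with_interview:.0%}")
```

This lands in the high 90s, close to (but not exactly) the dashboard's 99%; the tool may instead use the observed allow rate on interviewed cases.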

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 55.8% (+15.8% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)
Based on career data from 16 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 4th, 2026 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

Claim 1 includes the limitation “following the network modules in the attention network, there is a bilinear processing layer and a self-residual network…” This claim language could be interpreted to mean that the bilinear processing layer and self-residual network are contained within the attention network, or that just the network modules are contained within the attention network. Claim 2 includes the claim language “the method according to claim 1, wherein the mapping network comprises a first convolution network, the self-residual network”, which implies that the self-residual network defined in the first claim is contained within the mapping network. Since claim 1 defines the mapping network and attention network separately, it can be assumed that the self-residual network is not contained within the attention network. This leads the examiner to believe that the self-residual network and bilinear processing layer are not contained within the attention network, and they will therefore be interpreted as such.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 10-14 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over “Deep SR-ITM: Joint Learning of Super-Resolution and Inverse Tone-Mapping for 4K UHD HDR Applications” (hereinafter referred to by its primary author, Kim) in view of US20210142440 (hereinafter referred to by its primary author, Ahn), US20210365724 (hereinafter referred to by its primary author, Lee), and “Bilinear and bicubic interpolation methods for division of focal plane polarimeters” (hereinafter referred to by its primary author, Gao).
In regards to claim 1, Kim teaches an image processing method, comprising: using an inverse tone mapping neural network to process a first image, wherein the inverse tone mapping neural network is configured to expand a dynamic range of the first image and a color gamut range of the first image to obtain a second image that is expanded (Kim Section 3 “We propose Deep SR-ITM, a deep residual network based on signal decomposition and modulations, where an HR HDR image in the HDR display format of BT.2020 [2] and PQ-OETF [3] is generated from a single LR SDR image. Our network architecture is shown in Fig. 3.”); the inverse tone mapping neural network comprises a mapping network, the mapping network is used to realize the expansion (Kim Figure 3 Base Layer to “10 ResBlocks”), and the inverse tone mapping neural network further comprises an attention network (Kim Figure 3 “SMFb” network); an input of the mapping network and an input of the attention network are both the first image (Kim Figure 3 Description “The input LR-SDR image (aqua blue box) is concatenated with its guided filter decompositions (base and detail layer) before entering the network.”); the attention network is used to process image contents of the first image to generate correction coefficients, and the correction coefficients are used to correct parameters of the mapping network (Kim Figure 3 “SMFb”; Section 3.2 “Therefore, we introduce spatially-variant and image-adaptive modulations by element-wise multiplication, to aid the network in modelling more complex mappings, than can be modelled by simple CNNs. Operation-wise, this is similar to attention blocks (actually, a generalization of spatial channel attention) in high level vision tasks, such as object detection and classification.” “Secondly, the ResModBlock (green box) has an additional modulation component. It requires the shared modulation features (SMFb) of the base layer given by [Equation 4]” Examiner note: The modulation component is analogous to the correction coefficients of the present disclosure, as they are both used to correct parameters of the network, similar to attention.); the attention network comprises several network modules, each of the network modules comprises a convolution layer and an activation function which is after the convolution layer (Kim Figure 3 “SMFb” Network Examiner note: This network is composed of 3 modules, each containing one convolution operation (yellow) and one activation operation (blue)); and following the network modules in the attention network, there is a bilinear processing layer (Kim Figure 3 “Bicubic Interpolation”) and a self-residual network, a processing path of the self-residual network comprises the convolution layer (Kim Figure 3 1st and 2nd ResModBlock Examiner note: The ResBlock of this reference is defined to include 2 activation functions and 2 convolution functions, as well as a skip connection).

Kim fails to teach that each of the attention network modules comprises a maximum pooling layer and an instance normalization layer which are after the convolution layer, and a bilinear processing layer. However, Ahn teaches an attention network module comprising an instance normalization layer which is after the convolution layer (Ahn Figure 30; Paragraph [0428] “Instance normalization, residual connection, and convolution layer follow the attention layer to generate output feature map z.sub.xy. The image attention block offers a direct mechanism of transferring information from multiple target images to the pose of driver.”). Ahn is considered to be analogous to the claimed invention because they are both in the same field of convolutional neural networks.
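Instance normalization, the Ahn layer cited above, standardizes each feature channel of a single sample to zero mean and unit variance. A stdlib-only sketch over one flattened channel (the helper name and epsilon value are illustrative, not from Ahn):

```python
import math

def instance_norm(channel, eps=1e-5):
    """Normalize one feature channel of one sample: (x - mean) / sqrt(var + eps)."""
    mean = sum(channel) / len(channel)
    var = sum((x - mean) ** 2 for x in channel) / len(channel)
    return [(x - mean) / math.sqrt(var + eps) for x in channel]

normed = instance_norm([1.0, 2.0, 3.0, 4.0])
# normed now has (near-)zero mean and (near-)unit variance
```

Unlike batch normalization, the statistics here come from a single sample, so the result does not depend on what else is in the batch.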
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Kim to include the teachings of Ahn to provide the advantage of preserving details of the target image (Ahn Paragraph [0403] “To overcome such problems, we introduce components that address the mentioned problem: image attention block, target feature alignment, and landmark transformer. Through attending and warping the relevant features, the proposed architecture, called MarioNETte, produces high-quality reenactments of unseen identities in a few-shot setting. In addition, the landmark transformer dramatically alleviates the identity preservation problem by isolating the expression geometry through landmark disentanglement. Comprehensive experiments are performed to verify that the proposed framework can generate highly realistic faces, outperforming all other baselines, even under a significant mismatch of facial characteristics between the target and the driver.” Examiner note: The image attention block assists in preserving the identity of the target image, which would be necessary when performing super resolution and inverse tone mapping, as the original image features should be preserved, but upscaled in terms of resolution and color range.)

Lee teaches an attention network module comprising a maximum pooling layer which is after the convolution layer (Lee Figure 5; Paragraph [0089] “Referring to FIG. 8, an object detection method according to an embodiment of the present disclosure may include performing maximum pooling and average pooling on a convolutional feature map (S801), combining the maximum pooling feature map and the average pooling feature map (S803), obtaining an attention map by passing through a nonlinear function (S805), multiplying the attention map and the convolutional feature map (S807) and performing a binary classification on the multiplied result to generate the mask (S809).” Examiner note: This reference applies max pooling to a convolutional feature map, which is then passed through a nonlinear function to obtain an attention map.). Lee is considered to be analogous to the claimed invention because they are both in the same field of convolutional neural networks.

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Kim in view of Ahn to include the teachings of Lee to provide the advantage of reduced feature map dimensionality, allowing for more efficient computation (Lee Figure 5 “max pooled features” Examiner note: It is known in the art that max pooling reduces the dimensionality of a feature map, thus allowing for less data to handle. For more information, see Murray, “Generalized Max Pooling”.)

Furthermore, Gao teaches that bilinear and bicubic interpolation are comparable interpolation methods (Gao Abstract “This paper presents bilinear and bicubic interpolation methods tailored for the division of focal plane polarization imaging sensor.”; Figure 8 (b) and (d)). Kim in view of Ahn and Lee teaches the claimed device, with a substitution of the bilinear processing layer with a bicubic processing layer.
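Max pooling, as Lee applies it to a convolutional feature map, slides a window across the map and keeps only each window's maximum, shrinking the spatial dimensions. A stdlib-only sketch using a 2x2 window with stride 2 (a common configuration; Lee does not fix these hyperparameters):

```python
def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2 over a 2D feature map (list of rows)."""
    pooled = []
    for i in range(0, len(fmap) - 1, 2):
        row = []
        for j in range(0, len(fmap[i]) - 1, 2):
            row.append(max(fmap[i][j], fmap[i][j + 1],
                           fmap[i + 1][j], fmap[i + 1][j + 1]))
        pooled.append(row)
    return pooled

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 5, 6],
        [2, 2, 7, 8]]
print(max_pool_2x2(fmap))  # [[4, 2], [2, 8]]
```

The 4x4 map shrinks to 2x2, which is the dimensionality reduction the rejection relies on for its efficiency rationale.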
Gao teaches that both of these components and their functions are known in the art (Gao Section 2.1 “The bilinear interpolation methods are among the most common techniques used in image processing due to their computational simplicity.”; Section 2.4 “The bicubic interpolation method attempts to fit a surface between four corner points using a third order polynomial function [34].”). Furthermore, Gao shows that one of ordinary skill in the art could have substituted bilinear interpolation for bicubic interpolation, and this substitution would lead to the predictable result of lower computational requirements at the cost of lowered accuracy for interpolation, leading to pixelization (Gao Section 4.1 “From this set of images, it can be observed that both degree and angle of linear polarization images for bilinear and weighted bilinear interpolation have stronger pixellation effects compared to the true polarization images (see region B in the AoP image). Details of the horsehair are lost, and the boundary between the horsehair and the background is discrete due to the large error introduced by the bilinear interpolation method (see region A in the AoP image). For the bicubic and bicubic spline interpolation images the details of the horsehair in both degree and angle of linear polarization are recovered with similar results and closely resemble the true polarization images.”; Section 4.3 “First, bicubic spline interpolation shows the best results in the visual comparison, and the numerical error comparison via MSE confirms this observation. The algorithm complexity, as well as the large window area used to compute the interpolated images, allow for higher accuracy in the recovered images. The algorithm complexity introduces higher computational workload compared to the two bilinear interpolation methods.”).
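Bilinear interpolation, the cheaper of the two methods Gao compares, estimates a value at a fractional position inside a cell as a distance-weighted blend of the four corner samples. A stdlib-only sketch (the function name and corner ordering are ours, not Gao's):

```python
def bilinear(q00, q10, q01, q11, dx, dy):
    """Interpolate at fractional offsets (dx, dy) in [0, 1] within a unit cell.

    q00/q10 are the top corners, q01/q11 the bottom corners.
    """
    top = q00 * (1 - dx) + q10 * dx      # blend along x on the top edge
    bottom = q01 * (1 - dx) + q11 * dx   # blend along x on the bottom edge
    return top * (1 - dy) + bottom * dy  # blend the two edges along y

print(bilinear(0.0, 2.0, 4.0, 6.0, 0.5, 0.5))  # 3.0, the mean of the corners
```

Bicubic interpolation instead fits a third-order polynomial over a larger 4x4 neighborhood, which is why Gao observes higher accuracy at higher computational cost.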
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply the teachings of Gao to the device of Kim in view of Ahn and Lee, to substitute bilinear interpolation for bicubic interpolation.

In regards to claim 2, Kim in view of Ahn, Lee, and Gao teaches the method according to claim 1, wherein the mapping network comprises a first convolution network, the self-residual network and a second convolution network, the mapping network being used to realize the expansion comprises: using the first convolution network to process the first image, to obtain a first feature map (Kim Figure 3 Examiner note: The 1st ResBlock is considered to be analogous to the first convolution network, and the output of that block is analogous to the first feature map.); using the self-residual network to process the first feature map, to obtain a second feature map (Kim Figure 3 Examiner note: The 1st and 2nd ResModBlock, with the 2nd ResBlock in between, is considered to be analogous to the self-residual network, and the output of those blocks is analogous to the second feature map.); and using the second convolution network to process the second feature map, to obtain a third feature map (Kim Figure 3 Examiner note: The 3rd ResBlock and ResModBlock are considered to be analogous to the second convolution network, and the output of those blocks is analogous to the third feature map.), wherein the third feature map is used as the second image (Kim Figure 3 “FEb”), wherein the correction coefficients are used to correct parameters of the self-residual network (Kim Figure 3 “SMFb” Examiner note: SMFb supplies modulation to the 1st and 2nd ResModBlocks; the modulation is analogous to the correction coefficients of the attention network.).
In regards to claim 3, Kim in view of Ahn, Lee, and Gao teaches the method according to claim 2, wherein the self-residual network comprises m self-residual sub-networks that are connected in sequence, m is an integer greater than 1, (Kim Figure 3 1st and 2nd ResModBlock) using the self-residual network to process the first feature map, comprises: using a first processing path and a second processing path of a first self-residual sub-network in the self-residual network to separately process the first feature map that is received to obtain a first residual feature map (Kim Figure 3 1st ResModBlock Examiner note: The figure shows that this block performs 2 ReLU’s and 2 convolutions, and has a “residual” line, which is analogous to the first and second processing path); using a first processing path and a second processing path of an i-th self-residual sub-network in the self-residual network to separately process an (i-1)-th residual feature map that is obtained by an (i-1)-th self-residual sub-network to obtain an i-th residual feature map, wherein i is an integer greater than 1 and less than or equal to m, (Kim Figure 3 2nd ResModBlock) the first processing path comprises a residual convolution layer, and the second processing path is used to skip processing of the residual convolution layer. (Kim Figure 3 Examiner note: Both ResModBlocks contained in the self-residual network have two paths, the first with 2 ReLU’s and 2 convolutions, and the second is a “residual line” which is used to skip the ReLU’s and convolutions.) 
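The two-path structure the rejection maps onto Kim's ResModBlocks reduces to out = x + F(x): a processing path F and an identity skip path that bypasses it. A stdlib-only sketch on a 1D feature vector, where F is two ReLU-then-convolution stages (the 3-tap averaging kernel is purely illustrative, not Kim's learned weights):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def conv3(v, k=(0.25, 0.5, 0.25)):
    """Toy 1D convolution with zero padding and an illustrative 3-tap kernel."""
    padded = [0.0] + list(v) + [0.0]
    return [k[0] * padded[i] + k[1] * padded[i + 1] + k[2] * padded[i + 2]
            for i in range(len(v))]

def self_residual_block(x):
    """First path: two ReLU + convolution stages; second path: identity skip."""
    path = conv3(relu(conv3(relu(x))))
    return [a + b for a, b in zip(x, path)]  # the skip adds the input back

out = self_residual_block([1.0, -2.0, 3.0])
```

The second processing path is literally the `zip` with `x`: it skips every convolution and activation in the first path, matching the "residual line" the examiner points to in Kim Figure 3.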
In regards to claim 4, Kim in view of Ahn, Lee, and Gao teaches the method according to claim 3, wherein a number of feature layers of each residual feature map is n, n is a positive integer, and the attention network processes the image contents of the first image to obtain a coefficient feature map with a number of feature layers of n×m (Kim Section 3.2 “The modulation component then goes through additional layers that are not shared with other ResModBlocks, to account for the difference depending on the depth of each block.” Section 4.3 “We refer to the modulation features that are multiplied to the main branch feature maps at each modulation block as modulation maps.” Examiner note: The first reference shows that each modulation component (which is analogous to the correction coefficient) is different for each ResModBlock. The second reference shows that each of the modulation components is a feature map. Therefore, the modulation components passed to the ResModBlock must have at least n features (since n is any integer), and there must be at least m (the number of ResModBlocks) different feature maps, since each ResModBlock gets a unique modulation component.), the coefficient feature map is taken as the correction coefficients and is used to multiply with the residual feature map to correct parameters of the self-residual network (Kim Figure 3 ResModBlock multiplication component comes from SMFb).

In regards to claim 5, Kim in view of Ahn, Lee, and Gao teaches the method according to claim 2, wherein the first convolution network comprises a first convolution layer and an activation function (Kim Figure 3 1st ResBlock Examiner note: This ResBlock comprises at least one convolution layer and an activation function in the form of ReLU), and the second convolution network comprises a second convolution layer (Kim Figure 3 3rd ResBlock Examiner note: This ResBlock comprises at least one convolution layer).
In regards to claim 6, Kim in view of Ahn, Lee, and Gao teaches the method according to claim 1, wherein the first image is an image with a standard dynamic range, and the second image is an image with a high dynamic range (Kim Section 3 “an HR HDR image in the HDR display format of BT.2020 [2] and PQ-OETF [3] is generated from a single LR SDR image”; Section 3.3 “The later parts of the Deep SR-ITM consist of fusing the features of the base layer and the detail layer (FEb and FEd), and finally producing the HR HDR output.”).

In regards to claim 10, Kim in view of Ahn, Lee, and Gao teaches the method according to claim 1, wherein the inverse tone mapping neural network is trained by using a content loss function (Kim Section 4.1 “We used the L2 loss, Adam [29] optimizer and Xavier initialization method [30] for training.” Examiner note: L2 loss is an example of a content loss function, as described in Paragraph [0093] of the present application's specification.).

In regards to claim 11, Kim in view of Ahn, Lee, and Gao teaches a computing system for image processing, comprising: one or more processors; and one or more non-transitory computer-readable medium for storing instructions, wherein the instructions, when executed by the one or more processors, cause the one or more processors to perform operations, and the operations comprise (Kim Section 1 “Therefore at the terminal end, it is necessary to convert the FHD SDR videos to 4K/8K UHD HDR in order to display them on the premium displays.” Examiner note: Kim teaches that their method is performed at the “terminal end”, which means the point of interaction with a computer, which is used to execute commands and control the system. It is known in the art that a computer terminal has a processor and RAM, which is a form of non-transitory computer-readable medium), and renders obvious the remaining claim limitations as in the consideration of claim 1.
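The L2 loss that the rejection of claim 10 equates with a content loss is simply the mean of squared differences between the network's output and the ground-truth image. A stdlib-only sketch over flattened pixel lists:

```python
def l2_loss(pred, target):
    """Mean squared error between predicted and ground-truth pixel values."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

print(l2_loss([0.0, 0.5, 1.0], [0.0, 1.0, 1.0]))  # 0.25 / 3 ≈ 0.0833
```

Because every pixel contributes directly, minimizing this loss pushes the output toward the reference content, which is why it reads on a "content loss function".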
In regards to claim 12, Kim in view of Ahn, Lee, and Gao renders obvious the claim limitations as in the consideration of claim 2.
In regards to claim 13, Kim in view of Ahn, Lee, and Gao renders obvious the claim limitations as in the consideration of claim 3.
In regards to claim 14, Kim in view of Ahn, Lee, and Gao renders obvious the claim limitations as in the consideration of claim 4.
In regards to claim 18, Kim in view of Ahn, Lee, and Gao renders obvious the claim limitations as in the consideration of claims 6 and 10.
In regards to claim 19, Kim in view of Ahn, Lee, and Gao renders obvious the claim limitations as in the consideration of claim 11.
In regards to claim 20, Kim in view of Ahn, Lee, and Gao renders obvious the claim limitations as in the consideration of claims 1 and 11.

Claims 7-9 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Ahn, Lee, and Gao, and further in view of “A Multi-purpose Convolutional Neural Network for Simultaneous Super-Resolution and High Dynamic Range Image Reconstruction” (hereinafter referred to by its primary author, Kim2) and US9691133 (hereinafter referred to by its primary author, Liu).

In regards to claim 7, Kim in view of Ahn, Lee, and Gao teaches the method according to claim 1, but fails to teach inputting the second image into an enhancement processing network for processing to obtain an enhanced second image, wherein the enhancement processing network comprises a noise reduction network and/or a color mapping network. However, Kim2 teaches inputting the second image into an enhancement processing network for processing to obtain an enhanced second image (Kim2 Figure 3(b)). Kim in view of Ahn, Lee, and Gao contains a “base” device, which this device could be seen as an improvement on by including a super resolution network before or after the inverse tone mapping.
Kim2 contains a comparable device, as it performs both super resolution and inverse tone mapping, and shows that the super resolution and inverse tone mapping could be performed in any order. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve the device of Kim in view of Ahn, Lee, and Gao by including the teachings of Kim2, to achieve the predictable result of a device that can both increase the resolution and quality of an image, and can perform those operations in any order (Kim2 Section 3 “We propose a CNN-based architecture for joint SR and ITM. Our network performs the individual tasks as well as the joint task,”).

Furthermore, Liu teaches an enhancement processing network that comprises a noise reduction network (Liu Figure 3; Column 3 Line 25 “In FIG. 3, module 26 uses the current low resolution frame of image data, the next most future frame of low resolution data and the motion vectors between the two to produce a first version of a noise reduced current frame of image data, CF_LR″.”). Kim in view of Ahn, Lee, Gao, and Kim2 contains a “base” device, which this device could be seen as an improvement on by including a super resolution network before or after the inverse tone mapping. Liu contains a known technique of using multiple frames of a video to gather more information when performing super-resolution. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve the device of Kim in view of Ahn, Lee, Gao, and Kim2 to yield the predictable result of improved super resolution results through the use of more video data (Liu Column 1 Line 8 “Super resolution techniques include multi-frame techniques. Multi-frame super resolution (MFSR) techniques fuse several low resolution (LR) frames. It can deliver rich details and makes the video more stable in the temporal domain. As a byproduct, MFSR can reduce noise to some extent.”).

In regards to claim 8, Kim in view of Ahn, Lee, and Gao teaches the method according to claim 1, wherein the first image is a k-th frame of image in a video, and the second image is represented as an expanded k-th frame of image, k is an integer greater than 1 (Kim Section 1 “In this paper, we aim to tackle the joint super-resolution (SR) and inverse tone-mapping (ITM) problem, where low-resolution (LR) SDR video can be directly converted into high-resolution (HR) HDR video.”). Kim in view of Ahn, Lee, and Gao does not teach using the inverse tone mapping neural network to process a (k-1)-th frame of image and a (k+1)-th frame of image in the video, respectively, to obtain an expanded (k-1)-th frame of image and an expanded (k+1)-th frame of image; and using a super-resolution network to process the expanded k-th frame of image, the expanded (k-1)-th frame of image and the expanded (k+1)-th frame of image, to obtain a super-resolution k-th frame of image, wherein a resolution of the super-resolution k-th frame of image is higher than a resolution of the first image. However, Kim2 teaches using a super-resolution network to process the expanded k-th frame of image, to obtain a super-resolution k-th frame of image, wherein a resolution of the super-resolution k-th frame of image is higher than a resolution of the first image (Kim2 Figure 3(b) Examiner note: This figure shows that a frame could first be expanded using the ITM network, then passed into the SR network.). Kim in view of Ahn, Lee, and Gao contains a “base” device, which this device could be seen as an improvement on by including a super resolution network before or after the inverse tone mapping. Kim2 contains a comparable device, as it performs both super resolution and inverse tone mapping, and shows that the super resolution and inverse tone mapping could be performed in any order.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve the device of Kim in view of Ahn, Lee, and Gao by including the teachings of Kim2, to achieve the predictable result of a device that can both increase the resolution and quality of an image, and can perform those operations in any order (Kim2 Section 3 “We propose a CNN-based architecture for joint SR and ITM. Our network performs the individual tasks as well as the joint task,”).

Furthermore, Liu teaches using a super-resolution network to process the (k+1)-th frame of image, to obtain a super-resolution k-th frame of image, wherein a resolution of the super-resolution k-th frame of image is higher than a resolution of the first image (Liu Column 2 Line 56 “The super resolution module then receives the noise reduced current frame (CF_LR′), the motion vectors, the image data of noise reduced previous frames (P1_LR′, P2_LR′) and unprocessed future frames (F1_LR, F2_LR) to produce a current frame of super resolution data” Examiner note: This method takes in the current frame, and the next and previous frame, to realize the super-resolution of the current frame.). Kim in view of Ahn, Lee, Gao, and Kim2 contains a “base” device, which this device could be seen as an improvement on by including a super resolution network before or after the inverse tone mapping. Liu contains a known technique of using multiple frames of a video to gather more information when performing super-resolution. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve the device of Kim in view of Ahn, Lee, Gao, and Kim2 to yield the predictable result of improved super resolution results through the use of more video data (Liu Column 1 Line 8 “Super resolution techniques include multi-frame techniques. Multi-frame super resolution (MFSR) techniques fuse several low resolution (LR) frames. It can deliver rich details and makes the video more stable in the temporal domain. As a byproduct, MFSR can reduce noise to some extent.”).

Kim in view of Ahn, Lee, Gao, Kim2, and Liu teaches using the inverse tone mapping neural network to process a (k-1)-th frame of image and a (k+1)-th frame of image in the video, respectively, to obtain an expanded (k-1)-th frame of image and an expanded (k+1)-th frame of image; and using a super-resolution network to process the expanded k-th frame of image, the expanded (k-1)-th frame of image and the expanded (k+1)-th frame of image, to obtain a super-resolution k-th frame of image (Kim2 Figure 3(b) Examiner note: In this figure, we use the super resolution method of Liu to process the LR-HDR image to an HR-HDR image. Since Liu requires the next and previous frames for performing super resolution, the next and previous frames would have been passed through the ITM network of Kim2, producing expanded next and previous frames. Then those expanded next and previous frames would be processed using the SR network of Liu to produce a super resolution of the current frame.).
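The combined teaching for claim 8 is an ordering of operations: expand frames k-1, k, and k+1 with the ITM network first, then fuse all three with a multi-frame SR network. A sketch of that data flow with placeholder stages (both functions are stand-ins for illustration, not Kim2's or Liu's actual models):

```python
def itm(frame):
    """Placeholder inverse tone mapping: tags a frame as dynamic-range expanded."""
    return frame + "_hdr"

def multi_frame_sr(prev, cur, nxt):
    """Placeholder multi-frame super resolution fusing three expanded frames."""
    return f"SR({prev},{cur},{nxt})"

frames = ["f1", "f2", "f3"]          # frames k-1, k, k+1
expanded = [itm(f) for f in frames]  # each frame is expanded first
result = multi_frame_sr(*expanded)   # then all three are fused into frame k
print(result)  # SR(f1_hdr,f2_hdr,f3_hdr)
```

Claim 9's variant simply swaps the order: super-resolve the raw frames first, then apply ITM to the super-resolved result.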
In regards to claim 9, Kim in view of Ahn, Lee, Gao, Kim2, and Liu teaches the method according to claim 8, further comprising: using the super-resolution network to process a k-th frame of image, a (k-1)-th frame of image and a (k+1)-th frame of image in the video to obtain a super-resolution k-th frame of image (Liu Column 2 Line 56 “The super resolution module then receives the noise reduced current frame (CF_LR′), the motion vectors, the image data of noise reduced previous frames (P1_LR′, P2_LR′) and unprocessed future frames (F1_LR, F2_LR) to produce a current frame of super resolution data” Examiner note: This method takes in the current frame, and the next and previous frame, to realize the super-resolution of the current frame.), the super-resolution k-th frame of image is taken as the first image, wherein a resolution of the first image is higher than a resolution of the k-th frame of image, k is an integer greater than 1 (Kim2 Figure 3(a) Examiner note: This figure shows that the super resolution can be performed first, then the inverse tone mapping is performed on the super resolution image.).

In regards to claim 15, Kim in view of Ahn, Lee, Gao, Kim2, and Liu renders obvious the claim limitations as in the consideration of claim 7.
In regards to claim 16, Kim in view of Ahn, Lee, Gao, Kim2, and Liu renders obvious the claim limitations as in the consideration of claim 8.
In regards to claim 17, Kim in view of Ahn, Lee, Gao, Kim2, and Liu renders obvious the claim limitations as in the consideration of claim 9.

Response to Arguments

Applicant's arguments with respect to independent claims 1, 11, and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: “Deep HDR Hallucination for Inverse Tone Mapping” teaches a method of performing inverse tone mapping using an encoder/decoder UNet with skip connections.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CALEB LOGAN ESQUINO, whose telephone number is (703) 756-1462. The examiner can normally be reached M-Fr 8:00AM-4:00PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CALEB L ESQUINO/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Apr 05, 2023
Application Filed
May 30, 2025
Non-Final Rejection — §103
Sep 04, 2025
Response Filed
Nov 13, 2025
Final Rejection — §103
Feb 04, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Mar 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602924
Method for Semantic Localization of an Unmanned Aerial Vehicle
2y 5m to grant · Granted Apr 14, 2026
Patent 12602813
DEEP APERTURE
2y 5m to grant · Granted Apr 14, 2026
Patent 12541857
SYNTHESIZING IMAGES FROM THE PERSPECTIVE OF THE DOMINANT EYE
2y 5m to grant · Granted Feb 03, 2026
Patent 12530787
TECHNIQUES FOR DIGITAL IMAGE REGISTRATION
2y 5m to grant · Granted Jan 20, 2026
Patent 12518425
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 99% (+41.7%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
