Prosecution Insights
Last updated: April 19, 2026
Application No. 18/275,164

VIDEO PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Non-Final OA §103
Filed: Jul 31, 2023
Examiner: BARRY, STEVEN DANIEL
Art Unit: 2638
Tech Center: 2600 — Communications
Assignee: Honor Device Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 96% (Favorable)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 96%, above average (22 granted / 23 resolved; +33.7% vs TC avg)
Interview Lift: +5.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 4m (typical timeline)
Total Applications: 34 across all art units (11 currently pending)

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 33.0% (-7.0% vs TC avg)
§112: 0.9% (-39.1% vs TC avg)
Based on career data from 23 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 01/09/2026 have been fully considered but they are not persuasive. Applicant believed the incorporation of subject matter from Claim 2 into independent Claims 1, 8, & 9 made those claims allowable. However, Applicant did not incorporate any of the portion of Claim 2 cited as allowable in the previous Office Action. The material from Claim 2 that the Applicant added, "wherein a process of snapping the corresponding image in the video of the snapped image in response to the snapping instruction includes fusing a plurality of frames of video images into the snapped image based on a reference frame comprising the video image of the first exposure frame or the video image of the second exposure frame," is rejected by Kanbara et al (CN 108141531 B, hereinafter, "Kanbara"), pg. 25, para. 9, ln. 1-7, "In addition, in the case of using multiple frames of image data with different acquisition time to detect the known mobile vector is also the same. the control unit 34 in the detection area for detecting the motion vector is mixed with the image data obtained by the first imaging condition and the image data obtained by the second imaging condition, for detecting the image data of the second imaging condition in the image data of the detection area of the motion vector; The correction process as described above [Example 1] ~ [Example 3]. Then, the control unit 34 uses the image data after correction processing to detect the motion vector."
This teaches the obtaining a video shot through a camera lens comprises: alternately obtaining a video image of a first exposure frame and a video image of a second exposure frame, wherein a time for which the video image of a first exposure frame is exposed is greater than a time for which the video image of a second exposure frame is exposed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 8, 9, 11, 13, 16, 18, 20, & 21 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida (JP 2013219685 A, hereinafter, "Yoshida") in view of Wang et al (US 11057572 B1, hereinafter, "Wang"), Kanbara, and Kang et al (US 20190052790 A1, hereinafter, "Kang").

Regarding Claim 1, Yoshida teaches A video processing method, comprising: determining a video style template among a plurality of video style templates (Yoshida, pg. 4, para. 1, ln.
1, "The first LUT 106 and the second LUT 107…"), wherein each video style template is corresponding to a preset look up table (LUT) (Yoshida, Fig. 3, pg. 5, para. 4, ln. 10-12, "The image processing engine A 121 generates 4k2k moving image frames at 24 fps, and the second path setting unit 105 sequentially supplies these moving image frames to the first LUT 106." The first LUT corresponds to the 4k2k style. Continuing to Fig. 4, pg. 6, para. 4, ln. 4-6, "…the third path setting unit 110 inserts the second LUT 107 between the resizing unit 244 and the matrix unit 245 as a lookup table for realizing view assist." The second LUT corresponds to the view assist video style.) with the determined video style template corresponding to a first LUT (Yoshida, Fig. 1, pg. 4, para. 7, ln. 1-4, "…the YUV signals output from the image processing engine A 121 and the image processing engine B 122 are respectively supplied to the second path setting unit 105. The second path setting unit 105 controls the path of the YUV signal supplied from each image processing engine under the control of the controller 130, and the YUV signal is converted into the first LUT 106…"); processing the video by using a logarithm (LOG) curve corresponding to a current photosensitivity ISO of the camera lens, to obtain a LOG video (Yoshida, Fig. 6, pg. 5, para. 1, ln. 1-6, "The input / output characteristics of Log gamma are defined by numerical tables and functions on the camera side. The purpose of the gradation conversion processing by Log gamma is to provide images with a wide dynamic range while maintaining high gradation properties on the premise of post-processing [called post-production processing] on the editor side. Therefore, as shown in FIG. 6, the dark portion is slightly offset to prevent blackout, the intermediate luminance portion maintains the gradation with the minimum number of bits, and the bright portion suppresses overexposure until saturation." 
This teaches processing the video by using the LOG curve in Fig. 6 corresponding to a current photosensitivity ISO of the camera lens to obtain a LOG video.). Yoshida does not teach obtaining a video shot through a camera lens, the video including a video image of a first exposure frame and a video image of a second exposure frame; snapping, a corresponding image in the video as a snapped image in response to a snapping instruction, wherein a process of snapping the corresponding image in the video of the snapped image in response to the snapping instruction includes fusing a plurality of frames of video images into the snapped image based on a reference frame comprising the video image of the first exposure frame or the video image of the second exposure frame; processing the video by using a logarithm (LOG) curve corresponding to a current photosensitivity ISO of the camera lens, to obtain a LOG video; processing the snapped image by using the LOG curve corresponding to the current photosensitivity ISO of the camera lens, to obtain a LOG-based snapped image; processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template; and processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain a snapped image corresponding to the determined video style template. However, Wang teaches obtaining a video shot through a camera lens (Wang, [0010], ln. 1-4, "…these imaging assemblies may include one or more optical elements, such as an assembly of one or more lenses [e.g., a lens assembly] that focus incident light onto an imaging sensor disposed at a corresponding imaging plane [e.g., an array of sensing elements formed within a semiconductor substrate]."). Kanbara teaches the video including a video image of a first exposure frame and a video image of a second exposure frame (Kanbara, Fig. 1, pg. 
28, para. 7, ln. 1-3, "The camera 1 with image processing device has: a control unit 34 (setting part 34b), which sets the imaging condition of the first area of the imaging unit 32 to be different from the imaging condition of the second area of the imaging unit 32"); wherein a process of snapping the corresponding image in the video of the snapped image in response to the snapping instruction includes fusing a plurality of frames of video images into the snapped image based on a reference frame comprising the video image of the first exposure frame or the video image of the second exposure frame (Kanbara, pg. 25, para. 9, ln. 1-7, "In addition, in the case of using multiple frames of image data with different acquisition time to detect the known mobile vector is also the same. the control unit 34 in the detection area for detecting the motion vector is mixed with the image data obtained by the first imaging condition and the image data obtained by the second imaging condition, for detecting the image data of the second imaging condition in the image data of the detection area of the motion vector; The correction process as described above [Example 1] ~ [Example 3]. Then, the control unit 34 uses the image data after correction processing to detect the motion vector." This teaches the obtaining a video shot through a camera lens comprises: alternately obtaining a video image of a first exposure frame and a video image of a second exposure frame, wherein a time for which the video image of a first exposure frame is exposed is greater than a time for which the video image of a second exposure frame is exposed.). Kang teaches processing the snapped image by using the LOG curve corresponding to the current photosensitivity ISO of the camera lens, to obtain a LOG-based snapped image (Kang, Fig. 5a, [0143], ln. 1-4, "…For example, when the synthesized image is a 14-bit-depth image containing dynamic range information of 14 stops, as shown in FIG. 
5[a], an 8-bit-depth log image including dynamic range information of 14 stops can be generated by applying a log profile whose transform function has a log type." This teaches using the LOG curve in Fig. 5a corresponding to the current ISO of the camera lens to obtain a LOG-based image.); processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template (Kang, Fig. 6, [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table (LUT) corresponding to the first filter to the synthesized image." In the broadest reasonable interpretation, a filter is a video style and an image is a one-frame video. Therefore, this teaches processing the LOG video based on the first LUT corresponding to the determined video style template to obtain a video corresponding to the determined video style template.); and processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain a snapped image corresponding to the determined video style template (Kang, Fig. 6, [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table (LUT) corresponding to the first filter to the synthesized image." In the broadest reasonable interpretation, a filter is a video style. Therefore, this teaches processing the LOG video based on the first LUT corresponding to the determined video style template to obtain a video corresponding to the determined video style template.). 
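For orientation, the claim-1 pipeline mapped above (encoding a frame with an ISO-dependent logarithmic (LOG) curve, then applying a preset style-template lookup table (LUT)) can be sketched in a few lines. This is an illustrative sketch only: the function names, the particular LOG formula, and the contrast-lift LUT are hypothetical, and are not drawn from the application or from Yoshida, Kang, or the other cited references.

```python
import numpy as np

def log_curve(frame, iso):
    """Encode a linear [0, 1] frame with a LOG curve whose compression
    depends on the current sensor ISO (illustrative formula, not the
    applicant's)."""
    gain = 1.0 + np.log2(iso / 100.0)        # stronger compression at higher ISO
    return np.log1p(gain * frame) / np.log1p(gain)

def apply_lut(frame, lut):
    """Map each 8-bit code value of a LOG frame through a preset
    256-entry style-template LUT."""
    codes = np.clip(frame * 255.0, 0, 255).astype(np.uint8)
    return lut[codes]

# A trivial stand-in "style template": identity with a slight contrast lift.
style_lut = np.clip(np.linspace(0.0, 1.0, 256) ** 0.9, 0.0, 1.0)

linear_frame = np.random.rand(4, 4)           # stand-in for one video frame
log_frame = log_curve(linear_frame, iso=400)  # the "LOG video" frame
styled = apply_lut(log_frame, style_lut)      # frame in the chosen style
```

The same two transforms would apply equally to a snapped still image, which is the parallel limitation the examiner maps to Kang.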
It would have been obvious to a person having ordinary skill in the art at the time of the invention to combine the teachings of Wang, Kanbara, and Kang with those of Yoshida because it is well known in the art to obtain a video shot through a camera lens. It is well known in the art to include a video image of a first and second exposure frame. It is well known in the art to snap an image in a video as a snapped image in response to a snapping instruction. It is well known in the art to fuse a plurality of video image frames into an image based on a reference frame. It is well known in the art to process a video with a LOG curve corresponding to a photosensitivity ISO of a camera lens to obtain a LOG video. It is well known in the art to process a LOG video based on a first LUT corresponding to a determined video style template to obtain a video corresponding to said template. It is well known in the art to process a LOG-based snapped image based on a first LUT corresponding to the determined video style template to obtain a snapped image corresponding to said video style template.

Regarding Claim 3, Yoshida, Wang, Kanbara, and Kang teach the limitations of Claim 1 as noted above. Wang teaches a resolution of the video shot through the camera lens is equal to a resolution of the snapped image (Wang, [0055], ln. 8-9, "…the resolution for such picture or video is approximately 4000×2000 pixels or approximately 8 MP."). It would have been obvious to a person having ordinary skill in the art at the time of the invention to combine the teachings of Wang with those of Yoshida, Wang, Kanbara, and Kang because it is well known in the art to have a resolution of a video equal to the resolution of a snapped image.

Regarding Claim 5, Yoshida, Wang, Kanbara, and Kang teach the limitations of Claim 1 as noted above.
Kang teaches in a first video processing procedure, a process of processing, by using a logarithm LOG curve, the video shot through the camera lens, to obtain a LOG video, and a process of processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template are performed (Kang, Fig. 2, [0131], ln. 1-3, "In a first mode, a log image generator 212 can store a log image obtained by applying a log profile to the synthesized image data. The log image includes high contrast ratio information by applying a logarithmic function to the image data." Continuing to [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table [LUT] corresponding to the first filter to the synthesized image."); the video processing method further comprises a second video processing procedure, and the second video processing procedure comprises: processing, by using the logarithm LOG curve, the video shot through the camera lens, to obtain a LOG video (Kang, Fig. 5a, [0143], ln. 1-4, "when the synthesized image is a 14-bit-depth image containing dynamic range information of 14 stops, as shown in FIG. 5[a], an 8-bit-depth log image including dynamic range information of 14 stops can be generated by applying a log profile whose transform function has a log type." This teaches using the log curve in Fig. 5a to obtain a log video.); and processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template (Kang, [0181], ln. 
4-5, "The default lookup table may be a lookup table for applying tone mapping to the log image without color grading."); and the video processing method further comprises: storing the video corresponding to the determined video style template in the first video processing procedure (Kang, Fig. 1A, [0132], ln. 2-3, "Further, the controller 180 can store the generated log image in the storage unit 170."); and previewing the video corresponding to the determined video style template in the second video processing procedure (Kang, Fig. 8, [0167], ln. 3, "…the controller 180 can display a log image 910 on a preview screen."). It would have been obvious to a person having ordinary skill in the art at the time of the invention to combine the teachings of Kang with those of Yoshida, Wang, Kanbara, and Kang because it is well known in the art to use a LOG curve to obtain a LOG video and process it based on a LUT corresponding to a video style template to obtain a video corresponding to said template. It is well known in the art to store and preview videos.

Regarding Claim 8, Yoshida teaches an electronic device, comprising: determining a video style template among a plurality of video style templates (Yoshida, pg. 4, para. 1, ln. 1, "The first LUT 106 and the second LUT 107…"), wherein each video style template is corresponding to a preset look up table (LUT) (Yoshida, Fig. 3, pg. 5, para. 4, ln. 10-12, "The image processing engine A 121 generates 4k2k moving image frames at 24 fps, and the second path setting unit 105 sequentially supplies these moving image frames to the first LUT 106." The first LUT corresponds to the 4k2k style. Continuing to Fig. 4, pg. 6, para. 4, ln. 4-6, "…the third path setting unit 110 inserts the second LUT 107 between the resizing unit 244 and the matrix unit 245 as a lookup table for realizing view assist." The second LUT corresponds to the view assist video style.)
with the determined video style template corresponding to a first LUT (Yoshida, Fig. 1, pg. 4, para. 7, ln. 1-4, "…the YUV signals output from the image processing engine A 121 and the image processing engine B 122 are respectively supplied to the second path setting unit 105. The second path setting unit 105 controls the path of the YUV signal supplied from each image processing engine under the control of the controller 130, and the YUV signal is converted into the first LUT 106…"); processing the video by using a logarithm (LOG) curve corresponding to a current photosensitivity ISO of the camera lens, to obtain a LOG video (Yoshida, Fig. 6, pg. 5, para. 1, ln. 1-6, "The input / output characteristics of Log gamma are defined by numerical tables and functions on the camera side. The purpose of the gradation conversion processing by Log gamma is to provide images with a wide dynamic range while maintaining high gradation properties on the premise of post-processing [called post-production processing] on the editor side. Therefore, as shown in FIG. 6, the dark portion is slightly offset to prevent blackout, the intermediate luminance portion maintains the gradation with the minimum number of bits, and the bright portion suppresses overexposure until saturation." This teaches processing the video by using the LOG curve in Fig. 6 corresponding to a current photosensitivity ISO of the camera lens to obtain a LOG video.). 
Yoshida does not teach obtaining a video shot through a camera lens, the video including a video image of a first exposure frame and a video image of a second exposure frame; snapping, a corresponding image in the video as a snapped image in response to a snapping instruction, wherein a process of snapping the corresponding image in the video of the snapped image in response to the snapping instruction includes fusing a plurality of frames of video images into the snapped image based on a reference frame comprising the video image of the first exposure frame or the video image of the second exposure frame; processing the video by using a logarithm (LOG) curve corresponding to a current photosensitivity ISO of the camera lens, to obtain a LOG video; processing the snapped image by using the LOG curve corresponding to the current photosensitivity ISO of the camera lens, to obtain a LOG-based snapped image; processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template; and processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain a snapped image corresponding to the determined video style template. However, Wang teaches obtaining a video shot through a camera lens (Wang, [0010], ln. 1-4, "…these imaging assemblies may include one or more optical elements, such as an assembly of one or more lenses [e.g., a lens assembly] that focus incident light onto an imaging sensor disposed at a corresponding imaging plane [e.g., an array of sensing elements formed within a semiconductor substrate]."). Kanbara teaches the video including a video image of a first exposure frame and a video image of a second exposure frame (Kanbara, Fig. 1, pg. 28, para. 7, ln. 
1-3, "The camera 1 with image processing device has: a control unit 34 (setting part 34b), which sets the imaging condition of the first area of the imaging unit 32 to be different from the imaging condition of the second area of the imaging unit 32"); wherein a process of snapping the corresponding image in the video of the snapped image in response to the snapping instruction includes fusing a plurality of frames of video images into the snapped image based on a reference frame comprising the video image of the first exposure frame or the video image of the second exposure frame (Kanbara, pg. 25, para. 9, ln. 1-7, "In addition, in the case of using multiple frames of image data with different acquisition time to detect the known mobile vector is also the same. the control unit 34 in the detection area for detecting the motion vector is mixed with the image data obtained by the first imaging condition and the image data obtained by the second imaging condition, for detecting the image data of the second imaging condition in the image data of the detection area of the motion vector; The correction process as described above [Example 1] ~ [Example 3]. Then, the control unit 34 uses the image data after correction processing to detect the motion vector." This teaches the obtaining a video shot through a camera lens comprises: alternately obtaining a video image of a first exposure frame and a video image of a second exposure frame, wherein a time for which the video image of a first exposure frame is exposed is greater than a time for which the video image of a second exposure frame is exposed.). Kang teaches processing the snapped image by using the LOG curve corresponding to the current photosensitivity ISO of the camera lens, to obtain a LOG-based snapped image (Kang, Fig. 5a, [0143], ln. 1-4, "…For example, when the synthesized image is a 14-bit-depth image containing dynamic range information of 14 stops, as shown in FIG. 
5[a], an 8-bit-depth log image including dynamic range information of 14 stops can be generated by applying a log profile whose transform function has a log type." This teaches using the LOG curve in Fig. 5a corresponding to the current ISO of the camera lens to obtain a LOG-based image.); processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template (Kang, Fig. 6, [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table (LUT) corresponding to the first filter to the synthesized image." In the broadest reasonable interpretation, a filter is a video style and an image is a one-frame video. Therefore, this teaches processing the LOG video based on the first LUT corresponding to the determined video style template to obtain a video corresponding to the determined video style template.); and processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain a snapped image corresponding to the determined video style template (Kang, Fig. 6, [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table (LUT) corresponding to the first filter to the synthesized image." In the broadest reasonable interpretation, a filter is a video style. Therefore, this teaches processing the LOG video based on the first LUT corresponding to the determined video style template to obtain a video corresponding to the determined video style template.). 
Regarding Claim 9, Yoshida teaches a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is run on a computer, the computer is caused to perform a video processing method comprising: determining a video style template among a plurality of video style templates (Yoshida, pg. 4, para. 1, ln. 1, "The first LUT 106 and the second LUT 107…"), wherein each video style template is corresponding to a preset look up table (LUT) (Yoshida, Fig. 3, pg. 5, para. 4, ln. 10-12, "The image processing engine A 121 generates 4k2k moving image frames at 24 fps, and the second path setting unit 105 sequentially supplies these moving image frames to the first LUT 106." The first LUT corresponds to the 4k2k style. Continuing to Fig. 4, pg. 6, para. 4, ln. 4-6, "…the third path setting unit 110 inserts the second LUT 107 between the resizing unit 244 and the matrix unit 245 as a lookup table for realizing view assist." The second LUT corresponds to the view assist video style.) with the determined video style template corresponding to a first LUT (Yoshida, Fig. 1, pg. 4, para. 7, ln. 1-4, "…the YUV signals output from the image processing engine A 121 and the image processing engine B 122 are respectively supplied to the second path setting unit 105. The second path setting unit 105 controls the path of the YUV signal supplied from each image processing engine under the control of the controller 130, and the YUV signal is converted into the first LUT 106…"); processing the video by using a logarithm (LOG) curve corresponding to a current photosensitivity ISO of the camera lens, to obtain a LOG video (Yoshida, Fig. 6, pg. 5, para. 1, ln. 1-6, "The input / output characteristics of Log gamma are defined by numerical tables and functions on the camera side.
The purpose of the gradation conversion processing by Log gamma is to provide images with a wide dynamic range while maintaining high gradation properties on the premise of post-processing [called post-production processing] on the editor side. Therefore, as shown in FIG. 6, the dark portion is slightly offset to prevent blackout, the intermediate luminance portion maintains the gradation with the minimum number of bits, and the bright portion suppresses overexposure until saturation." This teaches processing the video by using the LOG curve in Fig. 6 corresponding to a current photosensitivity ISO of the camera lens to obtain a LOG video.). Yoshida does not teach obtaining a video shot through a camera lens, the video including a video image of a first exposure frame and a video image of a second exposure frame; snapping, a corresponding image in the video as a snapped image in response to a snapping instruction, wherein a process of snapping the corresponding image in the video of the snapped image in response to the snapping instruction includes fusing a plurality of frames of video images into the snapped image based on a reference frame comprising the video image of the first exposure frame or the video image of the second exposure frame; processing the video by using a logarithm (LOG) curve corresponding to a current photosensitivity ISO of the camera lens, to obtain a LOG video; processing the snapped image by using the LOG curve corresponding to the current photosensitivity ISO of the camera lens, to obtain a LOG-based snapped image; processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template; and processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain a snapped image corresponding to the determined video style template. 
However, Wang teaches obtaining a video shot through a camera lens (Wang, [0010], ln. 1-4, "…these imaging assemblies may include one or more optical elements, such as an assembly of one or more lenses [e.g., a lens assembly] that focus incident light onto an imaging sensor disposed at a corresponding imaging plane [e.g., an array of sensing elements formed within a semiconductor substrate]."). Kanbara teaches the video including a video image of a first exposure frame and a video image of a second exposure frame (Kanbara, Fig. 1, pg. 28, para. 7, ln. 1-3, "The camera 1 with image processing device has: a control unit 34 (setting part 34b), which sets the imaging condition of the first area of the imaging unit 32 to be different from the imaging condition of the second area of the imaging unit 32"); wherein a process of snapping the corresponding image in the video of the snapped image in response to the snapping instruction includes fusing a plurality of frames of video images into the snapped image based on a reference frame comprising the video image of the first exposure frame or the video image of the second exposure frame (Kanbara, pg. 25, para. 9, ln. 1-7, "In addition, in the case of using multiple frames of image data with different acquisition time to detect the known mobile vector is also the same. the control unit 34 in the detection area for detecting the motion vector is mixed with the image data obtained by the first imaging condition and the image data obtained by the second imaging condition, for detecting the image data of the second imaging condition in the image data of the detection area of the motion vector; The correction process as described above [Example 1] ~ [Example 3]. Then, the control unit 34 uses the image data after correction processing to detect the motion vector." 
This teaches the obtaining a video shot through a camera lens comprises: alternately obtaining a video image of a first exposure frame and a video image of a second exposure frame, wherein a time for which the video image of a first exposure frame is exposed is greater than a time for which the video image of a second exposure frame is exposed.). Kang teaches processing the snapped image by using the LOG curve corresponding to the current photosensitivity ISO of the camera lens, to obtain a LOG-based snapped image (Kang, Fig. 5a, [0143], ln. 1-4, "…For example, when the synthesized image is a 14-bit-depth image containing dynamic range information of 14 stops, as shown in FIG. 5[a], an 8-bit-depth log image including dynamic range information of 14 stops can be generated by applying a log profile whose transform function has a log type." This teaches using the LOG curve in Fig. 5a corresponding to the current ISO of the camera lens to obtain a LOG-based image.); processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template (Kang, Fig. 6, [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table (LUT) corresponding to the first filter to the synthesized image." In the broadest reasonable interpretation, a filter is a video style and an image is a one-frame video. 
Therefore, this teaches processing the LOG video based on the first LUT corresponding to the determined video style template to obtain a video corresponding to the determined video style template.); and processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain a snapped image corresponding to the determined video style template (Kang, Fig. 6, [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table (LUT) corresponding to the first filter to the synthesized image." In the broadest reasonable interpretation, a filter is a video style. Therefore, this teaches processing the LOG video based on the first LUT corresponding to the determined video style template to obtain a video corresponding to the determined video style template.).

Regarding Claim 11, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 8 as noted above. Wang teaches a resolution of the video shot through the camera lens is equal to a resolution of the snapped image (Wang, [0055], ln. 8-9, "…the resolution for such picture or video is approximately 4000×2000 pixels or approximately 8 MP.").

Regarding Claim 13, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 8 as noted above. Kang teaches in a first video processing procedure, the processor is caused to perform a process of processing, by using a logarithm LOG curve, the video shot through the camera lens, to obtain a LOG video, and a process of processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template are performed (Kang, Fig. 2, [0131], ln. 
1-3, "In a first mode, a log image generator 212 can store a log image obtained by applying a log profile to the synthesized image data. The log image includes high contrast ratio information by applying a logarithmic function to the image data." Continuing to [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table [LUT] corresponding to the first filter to the synthesized image."); and, in a second video processing procedure, the processor is caused to: process, by using the logarithm LOG curve, the video shot through the camera lens, to obtain a LOG video (Kang, Fig. 5a, [0143], ln. 1-4, "when the synthesized image is a 14-bit-depth image containing dynamic range information of 14 stops, as shown in FIG. 5[a], an 8-bit-depth log image including dynamic range information of 14 stops can be generated by applying a log profile whose transform function has a log type." This teaches using the log curve in Fig. 5a to obtain a log video.); and process the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template (Kang, [0181], ln. 4-5, "The default lookup table may be a lookup table for applying tone mapping to the log image without color grading."); and the processor is further caused to: store the video corresponding to the determined video style template in the first video processing procedure (Kang, Fig. 1A, [0132], ln. 2-3, "Further, the controller 180 can store the generated log image in the storage unit 170."); and preview the video corresponding to the determined video style template in the second video processing procedure (Kang, Fig. 8, [0167], ln. 3, "…the controller 180 can display a log image 910 on a preview screen."). 
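As context for the log-curve limitation mapped above: a log profile of the kind Kang's Fig. 5a describes redistributes output codes so that each stop of dynamic range receives an equal share of the 8-bit range. The sketch below is illustrative only; the function name, the pure-log2 curve, and the 14-stop/8-bit parameters are assumptions chosen to match the cited passage, not Kang's actual transform.

```python
import math

def log_encode(linear, stops=14.0, out_max=255):
    """Compress a normalized linear value (0..1) spanning `stops` stops
    of dynamic range into an 8-bit log code (0..out_max)."""
    floor = 2.0 ** -stops                    # darkest representable linear value
    linear = max(min(linear, 1.0), floor)    # clip into the representable range
    # log2(linear) runs from -stops (black) to 0 (clip); rescale to 0..1
    normalized = (math.log2(linear) + stops) / stops
    return round(normalized * out_max)

# Each stop of exposure gets an equal share (~18 codes) of the 8-bit range
print(log_encode(1.0))        # brightest input maps to 255
print(log_encode(2 ** -14))   # darkest representable input maps to 0
```

This is why an 8-bit log image can carry 14 stops of dynamic range information, as the cited [0143] states: the log curve trades linear precision in the highlights for coverage of the shadows.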
Regarding Claim 16, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 9 as noted above. Wang teaches a resolution of the video shot through the camera lens is equal to a resolution of the snapped image (Wang, [0055], ln. 8-9, "…the resolution for such picture or video is approximately 4000×2000 pixels or approximately 8 MP.").

Regarding Claim 18, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 9 as noted above. Kang teaches in a first video processing procedure, the computer is caused to perform a process of processing, by using a logarithm LOG curve, the video shot through the camera lens, to obtain a LOG video, and a process of processing the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template are performed (Kang, Fig. 2, [0131], ln. 1-3, "In a first mode, a log image generator 212 can store a log image obtained by applying a log profile to the synthesized image data. The log image includes high contrast ratio information by applying a logarithmic function to the image data." Continuing to [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table [LUT] corresponding to the first filter to the synthesized image."), and, in a second video processing procedure, the computer is caused to: process, by using the logarithm LOG curve, the video shot through the camera lens, to obtain a LOG video (Kang, Fig. 5a, [0143], ln. 1-4, "when the synthesized image is a 14-bit-depth image containing dynamic range information of 14 stops, as shown in FIG. 5[a], an 8-bit-depth log image including dynamic range information of 14 stops can be generated by applying a log profile whose transform function has a log type." 
This teaches using the log curve in Fig. 5a to obtain a log video.); and process the LOG video based on the first LUT corresponding to the determined video style template, to obtain a video corresponding to the determined video style template (Kang, Fig. 6, [0148], ln. 1-4, "In addition, when an input of selecting a first filter from among the plurality of filters 611 to 616, the controller 180 can perform tone mapping and color grading with respect to the synthesized image, by applying a first lookup table (LUT) corresponding to the first filter to the synthesized image." In the broadest reasonable interpretation, a filter is a video style and an image is a one-frame video. Therefore, this teaches processing the LOG video based on the first LUT corresponding to the determined video style template to obtain a video corresponding to the determined video style template.); and the computer is further caused to: store the video corresponding to the determined video style template in the first video processing procedure (Kang, Fig. 1A, [0132], ln. 2-3, "Further, the controller 180 can store the generated log image in the storage unit 170."); and preview the video corresponding to the determined video style template in the second video processing procedure (Kang, Fig. 8, [0167], ln. 3, "…the controller 180 can display a log image 910 on a preview screen.").

Regarding Claim 20, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 1 as noted above. Kang teaches the first LUT is a three-dimensional LUT (Kang, Fig. 1A, [0177], ln. 2-6, "The controller 180 can apply a 3D lookup table to a log image to output a display image subjected to tone mapping and color grading in a second mode or apply a 3D lookup table to a display image compressed into an SDR image to output a display image subjected to color grading in a third mode."). 
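For readers tracking the Claim 20 mapping, a 3D LUT of the sort Kang's [0177] cites is simply a lattice of output RGB triples indexed by quantized input RGB. The sketch below is a minimal illustration; the 17-point lattice size and the nearest-neighbor lookup (rather than the trilinear or tetrahedral interpolation production graders use) are simplifying assumptions, not Kang's implementation.

```python
def identity_lut(size=17):
    """Build a size^3 identity 3D LUT: every RGB lattice point maps to itself."""
    step = 1.0 / (size - 1)
    return [[[(r * step, g * step, b * step) for b in range(size)]
             for g in range(size)]
            for r in range(size)]

def apply_lut(rgb, lut):
    """Grade one RGB triple (components in 0..1) by nearest-lattice lookup.

    A style template would ship a LUT whose lattice entries encode its tone
    mapping and color grading; the identity LUT here leaves colors unchanged.
    """
    size = len(lut)
    r, g, b = (min(round(c * (size - 1)), size - 1) for c in rgb)
    return lut[r][g][b]

# Lattice-aligned colors pass through an identity LUT unchanged
print(apply_lut((1.0, 0.5, 0.0), identity_lut()))   # (1.0, 0.5, 0.0)
```

Because tone mapping and color grading are both baked into the lattice entries, a single 3D LUT application performs both operations at once, which is consistent with the cited passage.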
It would have been obvious to a person having ordinary skill in the art at the time of the invention to combine the teachings of Kang with those of Yoshida, Wang, and Kanbara because it is well known in the art to use 3D lookup tables.

Regarding Claim 21, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 8 as noted above. Kang teaches the first LUT is a three-dimensional LUT (Kang, Fig. 1A, [0177], ln. 2-6, "The controller 180 can apply a 3D lookup table to a log image to output a display image subjected to tone mapping and color grading in a second mode or apply a 3D lookup table to a display image compressed into an SDR image to output a display image subjected to color grading in a third mode.").

Claims 4, 12, & 17 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida, Wang, Kanbara, Kang, and Trinh & Gyurasz (WO 2018036784 A1, hereinafter, "Trinh").

Regarding Claim 4, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 1 as noted above. Trinh teaches before a process of processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain a snapped image corresponding to the determined video style template, the method further comprises: converting the LOG-based snapped image from 10-bit image data to 8-bit image data (Trinh, pg. 7, para. 4, ln. 1-2, "As a non-linear mapping function, for example, a log function for mapping the 12-bit or 10-bit image to an 8-bit image may be applied."). It would have been obvious to a person having ordinary skill in the art at the time of the invention to combine the teachings of Trinh with those of Yoshida, Wang, Kanbara, and Kang because it is well known in the art to convert LOG-based snapped images from 10-bit image data to 8-bit image data.

Regarding Claim 12, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 8 as noted above. 
Trinh teaches before a process of processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain the snapped image corresponding to the determined video style template, the processor is caused to: convert the LOG-based snapped image from 10-bit image data to 8-bit image data (Trinh, pg. 7, para. 4, ln. 1-2, "As a non-linear mapping function, for example, a log function for mapping the 12-bit or 10-bit image to an 8-bit image may be applied.").

Regarding Claim 17, Yoshida, Wang, Kanbara, and Kang teach the limitations of independent Claim 9 as noted above. Trinh teaches before a process of processing the LOG-based snapped image based on the first LUT corresponding to the determined video style template, to obtain the snapped image corresponding to the determined video style template, the computer is caused to: convert the LOG-based snapped image from 10-bit image data to 8-bit image data (Trinh, pg. 7, para. 4, ln. 1-2, "As a non-linear mapping function, for example, a log function for mapping the 12-bit or 10-bit image to an 8-bit image may be applied.").

Claims 6, 14, & 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida, Wang, Kanbara, Kang, and Tanaka (JP 3493453 B2, hereinafter, "Tanaka").

Regarding Claim 6, Yoshida, Wang, Kanbara, and Kang teach the limitations of dependent Claim 5 as noted above. Tanaka teaches a video resolution in the second video processing procedure is less than a resolution of the snapped image (Tanaka, pg. 8, para. 7, all lines, "In each frame of the image recorded by the digital camera 1, high-resolution image data [1600 × 1200 pixels] compressed in the tag portion and JPEG format and image data for thumbnail display [80 × 60 pixels] are displayed. ] [sic] Is recorded."). 
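The Trinh passage cited above (mapping 10-bit or 12-bit data to an 8-bit image via a log function) amounts to a one-line requantization. The `log1p` shape below is an illustrative choice, not Trinh's disclosed function; it simply shows why a log mapping preserves shadow detail that a plain bit shift would discard.

```python
import math

def log_map_10_to_8(code10):
    """Requantize a 10-bit code (0..1023) to 8 bits (0..255) through a log curve.

    log1p keeps the curve defined at code 0 and monotonic throughout, and it
    spends far more of the 8-bit range on shadows than a linear bit shift
    (code10 >> 2) would.
    """
    return round(255 * math.log1p(code10) / math.log1p(1023))

print(log_map_10_to_8(0))      # 0
print(log_map_10_to_8(1023))   # 255
print(log_map_10_to_8(31))     # far more than the linear 31 >> 2 == 7
```

The endpoints stay fixed (0 maps to 0, 1023 to 255), so the conversion composes cleanly with a subsequent 8-bit LUT stage of the kind discussed for Claims 4, 12, and 17.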
It would have been obvious to a person having ordinary skill in the art at the time of the invention to combine the teachings of Tanaka with those of Yoshida, Wang, Kanbara, and Kang because it is well known in the art to have a video resolution less than a snapped image resolution.

Regarding Claim 14, Yoshida, Wang, Kanbara, and Kang teach the limitations of dependent Claim 13 as noted above. Tanaka teaches a video resolution in the second video processing procedure is less than a resolution of the snapped image (Tanaka, pg. 8, para. 7, all lines, "In each frame of the image recorded by the digital camera 1, high-resolution image data [1600 × 1200 pixels] compressed in the tag portion and JPEG format and image data for thumbnail display [80 × 60 pixels] are displayed. ] [sic] Is recorded.").

Regarding Claim 19, Yoshida, Wang, Kanbara, and Kang teach the limitations of dependent Claim 18 as noted above. Tanaka teaches a video resolution in the second video processing procedure is less than a resolution of the snapped image (Tanaka, pg. 8, para. 7, all lines, "In each frame of the image recorded by the digital camera 1, high-resolution image data [1600 × 1200 pixels] compressed in the tag portion and JPEG format and image data for thumbnail display [80 × 60 pixels] are displayed. ] [sic] Is recorded.").

Allowable Subject Matter

Claims 2, 10, & 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
Regarding Claim 2, the prior art of record – taken alone or in combination – fails to teach or render obvious the process of snapping the corresponding image in the video as the snapped image in response to the snapping instruction comprises: using the video image of the second exposure frame as the reference frame if a moving object is detected in the video; or using the video image of the first exposure frame as the reference frame if no moving object is detected in the video.

Regarding Claim 10, the prior art of record – taken alone or in combination – fails to teach or render obvious the process of snapping the corresponding image in the video as the snapped image in response to the snapping instruction comprises: using the video image of the second exposure frame as the reference frame if a moving object is detected in the video; or using the video image of the first exposure frame as the reference frame if no moving object is detected in the video.

Regarding Claim 15, the prior art of record – taken alone or in combination – fails to teach or render obvious the process of snapping the corresponding image in the video as the snapped image in response to the snapping instruction comprises: using the video image of the second exposure frame as the reference frame if a moving object is detected in the video; or using the video image of the first exposure frame as the reference frame if no moving object is detected in the video.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN DANIEL BARRY whose telephone number is (571)270-0432. The examiner can normally be reached M-Th 0730-1630. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lin Ye can be reached on 517-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEVEN DANIEL BARRY/
Examiner, Art Unit 2638

/LIN YE/
Supervisory Patent Examiner, Art Unit 2638

Prosecution Timeline

Jul 31, 2023
Application Filed
May 19, 2025
Non-Final Rejection — §103
Aug 21, 2025
Response Filed
Oct 16, 2025
Final Rejection — §103
Jan 09, 2026
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Feb 11, 2026
Examiner Interview Summary
Feb 11, 2026
Examiner Interview (Telephonic)
Feb 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598391
OPTICAL INFORMATION READING APPARATUS AND OPTICAL INFORMATION READING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12593121
METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR IMAGE PROCESSING
2y 5m to grant Granted Mar 31, 2026
Patent 12593136
BRIGHTNESS DETECTION METHOD AND APPARATUS, CONTROL METHOD AND APPARATUS FOR PHOTOGRAPHIC APPARATUS, AND MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12581190
IMAGING SYSTEM AND MOVING BODY PROVIDED WITH SAME
2y 5m to grant Granted Mar 17, 2026
Patent 12464253
LIGHT-EMITTING DIODE (LED) FLICKERING MANAGEMENT (LFM) FOR SPATIALLY MULTIPLEXED IMAGE SENSOR
2y 5m to grant Granted Nov 04, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
96%
Grant Probability
99%
With Interview (+5.0%)
2y 4m
Median Time to Grant
High
PTA Risk
Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
