Prosecution Insights
Last updated: April 19, 2026
Application No. 18/533,379

INBOUND VIDEO MODIFICATION SYSTEM AND METHOD

Final Rejection: §101 & §103
Filed: Dec 08, 2023
Examiner: COLEMAN, STEPHEN P
Art Unit: 2675
Tech Center: 2600 (Communications)
Assignee: Lenovo (Beijing) Limited
OA Round: 2 (Final)
Grant Probability: 84% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 84% (above average); 737 granted / 877 resolved; +22.0% vs TC avg
Interview Lift: +11.6% (moderate, roughly +12%) among resolved cases with interview
Avg Prosecution: 2y 5m (typical timeline); 47 applications currently pending
Total Applications: 924 across all art units (career history)

Statute-Specific Performance

§101: 12.5% (-27.5% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 877 resolved cases.

Office Action

§101 & §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Status

The examiner acknowledges the amendment of claims 1-2, 10-11 & 18-19, the addition of claims 21-28, and the cancellation of claims 3-6, 8, 12-15, 17 & 20 filed 01/27/2026.

RESPONSE TO ARGUMENTS: 35 USC 101 (Alice)

Applicant Argument 1

Applicant submits the amended claims are not directed to an abstract idea because they are directed to a particular improvement in the capabilities of a computing device. Specifically, applicant submits the amended claims are not abstract under Enfish and McRO because they are directed to a particular improvement in computing capabilities. Applicant cites a “video modification system, method and computer program product” that technically processes inbound video frames from a remote camera system.

After carefully reviewing applicant's amendments, the 35 USC 101 guidance, and the claim limitations, the examiner respectfully disagrees. The claims as currently constructed do not improve the functioning of the computer itself; they use generic memory/processors to analyze image data, classify attributes, identify a camera type, select tuning parameters, and display an adjusted image. As to Enfish and McRO, the amended claims deviate from those cases because they do not recite a specific data structure, a specific rule set, or a specific constrained algorithm of the type relied upon in Enfish and McRO. The current claim layout is framed around analyzing, determining, applying, and displaying. In view of the above, the rejection is sufficient and respectfully maintained.
Applicant Argument 2

Applicant submits the claims solve a specific computer technology problem of degraded inbound video streams caused by heterogeneous remote camera hardware and network transmission, which manifests as measurable image attributes outside desired operating targets. Applicant further submits the amended claims recite a concrete pipeline that improves the computer's image rendering subsystem by automatically correcting technical degradations in real time.

After carefully reviewing applicant's amendments, the 35 USC 101 guidance, and the claim limitations, the examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of applicant's invention, it is noted that the features upon which applicant relies (i.e., a network handling mechanism, transmission-path correction mechanism, or heterogeneity management architecture) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Applicant Argument 3

Applicant submits amended claim 1 recites a specific computer-implemented image processing pipeline, including concrete measurement operations, machine learning classifications, database signature comparison, and application of particular tuning parameters to bring measured attribute values within target values; therefore these steps cannot be performed in the human mind.

After carefully reviewing applicant's amendments, the 35 USC 101 guidance, and the claim limitations, the examiner respectfully disagrees. The amended claim language remains a broad evaluation: observing image attributes, comparing them to stored profiles, identifying the likely camera type, choosing tuning parameters, and moving attribute values into target ranges.
The examiner further submits that the above evaluation is classified as a mental process, mathematical evaluation, or data analysis framework. Merely implementing an abstract idea on a generic computer does not integrate the exception into a practical application. The amendment does not disclose enough specific technical implementation detail.

Applicant Argument 4

Applicant submits the claims yield concrete technical improvements: reduced noise, increased sharpness, corrected color, corrected distortion, and improved tone scale rendering. In response to applicant's argument that the references fail to show certain features of applicant's invention, it is noted that the features upon which applicant relies (i.e., reduced noise, increased sharpness, corrected color, corrected distortion, and improved tone scale rendering) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Applicant Argument 5

Applicant submits the claims involve specific technical steps of image measurement, statistical computation, neural network inference, database lookups, convolution and filter operations, tone mapping, geometric warping, and target range verification, and not merely high-level analyzing/determining/applying/displaying. In response, it is noted that the features upon which applicant relies (i.e., technical steps of image measurement, statistical computation, neural network inference, database lookups, convolution and filter operations, tone mapping, geometric warping, and target range verification) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims.
See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Applicant Argument 6

Applicant submits the claims yield a practical application because they add specific limitations, improve a technological field, and are rooted in computer technology. After carefully reviewing applicant's amendments, the 35 USC 101 guidance, and the claim limitations, the examiner respectfully disagrees. Applicant's assertion of a practical application does not show a specific machine-based implementation or another meaningful limit beyond generic computerized image analysis and correction. The additional elements remain generic memory and one or more processors used to implement the overall process.

Applicant Argument 7

Applicant submits the specification's concrete operations add meaningful technological constraints beyond mere "apply it" instructions. In response, it is noted that the features upon which applicant relies (i.e., MTF compensation, white balance/color correction, noise reduction, distortion correction, color channel adaptive zoom/sharpening, ANN training, pixel region sampling, and contrast compensation) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Applicant Argument 8

Applicant submits that, under DDR Holdings, the claims are rooted in computer technology and use computer-specific mechanisms such as neural network inference on pixel arrays, database signature comparison, and algorithmic transformations executed by processors. In response, the claimed processors executing algorithmic transformations are broadly described and are directed to abstract data analysis and optimization.
Merely implementing an abstract idea on a generic computer does not integrate the exception into a practical application.

Prior Art Rejection

The examiner acknowledges the amendment of claims 1-2, 10-11 & 18-19, the addition of claims 21-28, and the cancellation of claims 3-6, 8, 12-15, 17 & 20 filed 01/27/2026. Applicant's arguments filed 01/27/2026 have been fully considered but are deemed moot in view of the new grounds of rejection. Due to the variation in claim scope via the amendments, a new ground of rejection is proper.

CLAIM OBJECTIONS

Claim 26 is objected to under 37 CFR 1.75(c) as being in improper form because two claims numbered 26 exist in the amendment. See MPEP § 608.01(n). Accordingly, the two claims 26 will be treated on the merits as 26a and 26b. Appropriate action is required.

CLAIM REJECTIONS - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 7-8, 10-11, 16-19 & 26(a/b)-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more, as determined under the Subject Matter Eligibility Test for Products and Processes (Federal Register, Vol. 79, No. 241, dated Tuesday, December 16, 2014, page 74621). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional device elements, which are recited at a high level of generality, provide conventional computer functions that do not add meaningful limits to practicing the abstract idea.

Claims 1, 10 & 18

Step 1

This step inquires “is the claim to a process, machine, manufacture or composition of matter?” Yes. Claim 1 (“system”) is a machine; claim 10 (“method”) is a process; claim 18 (“computer program product”) is an article of manufacture.
Step 2A - Prong 1

This step inquires “does the claim recite an abstract idea, law of nature, or natural phenomenon?” The claim appears to be directed to an abstract idea. The limitation of “analyze inbound image data generated by a remote camera system; performing one or more image processing measurements on the inbound image data, classifying one or more image attributes of the inbound image data, and inputting the inbound image data into a machine learning algorithm, the one or more image attributes including one or more of sharpness, contrast, color, noise, or distortion; determine, based on the analysis, an identity of the remote camera system by determining the one or more image attributes of the inbound image data and comparing the one or more image attributes to a set of signature profiles, each of the signature profiles associated with a different camera system type; apply a set of one or more tuning parameters to the inbound image data to generate adjusted image data, wherein the one or more tuning parameters in the set are selected based on the identity of the remote camera system to cause a respective value of the one or more image attributes of the adjusted image data to be within a respective pre-selected target range; and display the adjusted image data on a display device for viewing of the adjusted image data by a user”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind (e.g., mathematical concepts, mental processes, or certain methods of organizing human activity) but for the recitation of generic computer components. That is, other than reciting “a memory configured to store program instructions; and one or more processors operably connected to the memory, wherein the program instructions are executable by the one or more processors,” nothing in the claim element precludes the steps from practically being performed in the mind.
For example, but for the “a memory configured to store program instructions; and one or more processors operably connected to the memory, wherein the program instructions are executable by the one or more processors” language, “analyzing, determining, applying, generating, displaying” in the context of this claim covers performance of the limitation in the mind (e.g., mathematical concepts, mental processes, or certain methods of organizing human activity).

STEP 2A - PRONG 1 - CONCLUSION

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A - Prong 2

This step inquires “does the claim recite additional elements that integrate the judicial exception into a practical application?” This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of “a memory configured to store program instructions; and one or more processors operably connected to the memory, wherein the program instructions are executable by the one or more processors” used to perform the “analyzing, determining, applying, generating, displaying” steps.
The “a memory configured to store program instructions; and one or more processors operably connected to the memory, wherein the program instructions are executable by the one or more processors” are recited at a high level of generality (i.e., as a generic processor) performing “analyze inbound image data generated by a remote camera system; performing one or more image processing measurements on the inbound image data, classifying one or more image attributes of the inbound image data, and inputting the inbound image data into a machine learning algorithm, the one or more image attributes including one or more of sharpness, contrast, color, noise, or distortion; determine, based on the analysis, an identity of the remote camera system by determining the one or more image attributes of the inbound image data and comparing the one or more image attributes to a set of signature profiles, each of the signature profiles associated with a different camera system type; apply a set of one or more tuning parameters to the inbound image data to generate adjusted image data, wherein the one or more tuning parameters in the set are selected based on the identity of the remote camera system to cause a respective value of the one or more image attributes of the adjusted image data to be within a respective pre-selected target range; and display the adjusted image data on a display device for viewing of the adjusted image data by a user”, such that they amount to no more than mere instructions to apply the exception using a generic computer component.

STEP 2A - PRONG 2 - CONCLUSION

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B

The critical inquiry here is: does the claim recite additional elements that amount to “significantly more” than the judicial exception?
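For readers tracking the technical dispute, the claim limitation quoted above describes a measurable flow: measure image attributes, match them against stored camera signature profiles, pick the tuning set for the identified camera, and adjust. A minimal Python sketch of that flow follows; every profile value, attribute formula, and tuning parameter here is invented for illustration and is not taken from the application or the cited art.

```python
# Hypothetical sketch of the claimed pipeline. All signature profiles,
# attribute measurements, and tuning values are illustrative only.

SIGNATURE_PROFILES = {
    "cam_type_a": {"sharpness": 0.42, "noise": 0.30},
    "cam_type_b": {"sharpness": 0.75, "noise": 0.10},
}

TUNING_SETS = {
    "cam_type_a": {"sharpen_gain": 1.6, "denoise_strength": 0.5},
    "cam_type_b": {"sharpen_gain": 1.1, "denoise_strength": 0.1},
}

def measure_attributes(frame):
    """Stand-in for the claimed image processing measurements."""
    mean = sum(frame) / len(frame)
    noise = sum(abs(p - mean) for p in frame) / len(frame) / 255
    return {"sharpness": min(noise * 4, 1.0), "noise": noise}

def identify_camera(attrs):
    """Pick the signature profile closest to the measured attributes."""
    def distance(profile):
        return sum((attrs[k] - v) ** 2 for k, v in profile.items())
    return min(SIGNATURE_PROFILES, key=lambda c: distance(SIGNATURE_PROFILES[c]))

def adjust(frame, tuning):
    """Toy 'tuning parameter': scale pixels by the sharpen gain, clipped to 8 bits."""
    return [min(int(p * tuning["sharpen_gain"]), 255) for p in frame]

def process_inbound(frame):
    attrs = measure_attributes(frame)
    camera = identify_camera(attrs)
    return camera, adjust(frame, TUNING_SETS[camera])
```

The eligibility dispute is over whether each of these steps, stated at the claim's level of generality, could equally be carried out mentally with pen and paper; the sketch is only a concretization of the flow, not evidence either way.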
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using “a memory configured to store program instructions; and one or more processors operably connected to the memory, wherein the program instructions are executable by the one or more processors” to perform the “analyzing, determining, applying, generating, displaying” steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Dependent Claims

As to claims 2, 11 & 19, these claims are directed to generic computer components (“one or more processors”), a mental process (“determine deficiency and applying tuning”), and insignificant extra-solution activity (“mitigate image quality deficiency”). Thus, these claims do not integrate the abstract idea into a practical application or constitute significantly more than the abstract idea.

As to claims 7 & 16, these claims are directed to generic computer components (“one or more processors”), a mental process (“analyzing”), and insignificant extra-solution activity (“data gathering/analysis”). Thus, these claims do not integrate the abstract idea into a practical application or constitute significantly more than the abstract idea.

As to claims 8 & 17, these claims are directed to generic computer components (“one or more processors”), a mental process (“evaluate vs. target, adjusting toward target”), and insignificant extra-solution activity (“adding control mechanism”). Thus, these claims do not integrate the abstract idea into a practical application or constitute significantly more than the abstract idea.

As to claim 26a, this claim is directed to a mental process (“association, look up, select logic”).
Thus, this claim does not integrate the abstract idea into a practical application or constitute significantly more than the abstract idea.

As to claim 26b, this claim is directed to a mental process (“prediction/classification”). Thus, this claim does not integrate the abstract idea into a practical application or constitute significantly more than the abstract idea.

As to claim 27, this claim is directed to a mental process (“measurement/analysis of image regions”). Thus, this claim does not integrate the abstract idea into a practical application or constitute significantly more than the abstract idea.

CLAIM REJECTIONS - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 9-11 & 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650).

As to claims 1, 10 & 18, Swierk discloses a memory configured to store program instructions; and one or more processors (102, Fig. 1 & [0027] discloses a processor) operably connected to the memory (104, Fig. 1 & [0027] discloses a memory), wherein the program instructions are executable by the one or more processors to analyze inbound image data generated by a remote camera system (820, Fig. 8 & [0045] discloses “MPCAPI transmits video frames to a multimedia framework pipeline”; 825, Fig. 8 discloses processing video); to analyze the inbound image data by performing one or more image processing measurements on the inbound image data (955, Fig. 9 & [0050]-[0051] discloses analysis via image statistics and edge/outline signals that feed adjustment selection); to analyze the inbound image data by classifying one or more image attributes of the inbound image data, the one or more image attributes including one or more of sharpness, contrast, color, noise, or distortion (Fig. 3 & [0050]-[0051], [0123]-[0124], [0219], [0221] discloses color blending/matching 382, luminance/brightness blending 383, noise module 392, virtual background blur 388, and image-statistic-based processing of visual characteristics); and to analyze the inbound image data by inputting the inbound image data into a machine learning algorithm (825, Fig. 8; 702-718, Fig. 7 & [0048]-[0049], [0139]-[0142] discloses the ICCSMS trained neural network used at runtime to output optimized adjustments that the pipeline applies).

Swierk further discloses applying a set of one or more tuning parameters to the inbound image data to generate adjusted image data (825, Fig. 8; 970/975, Fig. 9 discloses applying optimized lighting/background/facial corrections and additional optimized adjustments) and displaying the adjusted image data on a display device for viewing of the adjusted image data by a user (980, Fig. 9 discloses sending the enhanced video frames to the collaboration application for viewing and display on display device 110).

Swierk is silent to determining an identity of the remote camera system by determining the one or more image attributes of the inbound image data and comparing the one or more image attributes to a set of signature profiles, each of the signature profiles associated with a different camera system type.
However, Muriello discloses a camera identification module 130 that derives a camera signature from images and/or metadata and stores signatures in a camera store 270, with an image analyzer 260 extracting attributes from image metadata and pixel data, and with the camera identification module matching information extracted from multiple images to determine whether the images are related to a specific camera. See Fig. 2 (modules 130, 260, 270), Fig. 3 (process 300), and [0015], [0026]-[0027]. Muriello further discloses that a camera signature comprises features extracted from an image that characterize the camera, and that newly discovered signatures are compared against existing signatures stored in the camera store 270 to determine whether the camera is already known.

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Swierk's disclosure to include Muriello's camera-signature/profile comparison in order to automatically map an inbound remote stream to a known camera profile and thus select device-specific tuning parameters appropriate for the identified camera system type, thereby improving initial correction accuracy across heterogeneous remote camera systems.

Swierk in view of Muriello is silent to the one or more tuning parameters being selected to cause a respective value of the one or more image attributes of the adjusted image data to be within a respective pre-selected target range. However, Kim discloses target-range closed-loop control, including calculating an average luminance value of an image frame, checking whether the value is within a pre-set range including a prescribed final target value, and, if not, determining a shift step and repeatedly adjusting exposure/gain until the value is within range and substantially at the final target value. See [0008], [0029]-[0031] & Fig. 1.
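The target-range control attributed to Kim is a simple closed loop: measure average luminance, check it against a pre-set band around the target, and, if outside, apply a shift step and re-measure. A hedged Python sketch, with the target, tolerance, and step formula invented for illustration:

```python
def drive_to_target(read_luminance, target=128, tolerance=8,
                    initial_gain=1.0, max_iters=20):
    """Repeatedly adjust gain until average luminance falls within
    [target - tolerance, target + tolerance], loosely following the
    closed-loop scheme attributed to Kim. Step sizing is illustrative."""
    gain = initial_gain
    for _ in range(max_iters):
        luminance = read_luminance(gain)
        error = target - luminance
        if abs(error) <= tolerance:
            return gain, luminance  # within the pre-selected target range
        gain *= 1.0 + 0.5 * error / target  # proportional shift step
    return gain, read_luminance(gain)

# Toy sensor model: luminance scales linearly with gain, capped at 255.
sensor = lambda gain: min(64 * gain, 255)
final_gain, final_lum = drive_to_target(sensor)
```

With the toy linear sensor the loop converges to the target band in a handful of iterations; the point is only to show the "check range, shift, repeat" structure the rejection relies on.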
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the Swierk in view of Muriello combination to incorporate Kim's target-range control teaching in order to cause measured image attributes of the adjusted remote-camera image data to fall within defined target quality bands, thereby standardizing corrected output across different source camera systems.

As to claims 2, 11 & 19, Swierk in view of Muriello & KIM discloses everything as disclosed in claims 1, 10 & 18. In addition, Swierk discloses wherein the one or more processors are configured to determine the at least one image quality deficiency based on the analysis (945, Fig. 9 discloses low ambient light conditions detected; 950/955/960, Fig. 9 discloses appearance filter, lighting pre-processing, and color-vector-driven processing), and the one or more processors apply the set of one or more tuning parameters to the inbound image data to mitigate the at least one image quality deficiency in the adjusted image data (825, Fig. 8 & 970/975, Fig. 9 discloses video processing based on optimized adjustments, including application of lighting/background/facial corrections and additional optimized adjustments).

As to claims 7 & 16, Swierk in view of Muriello & KIM discloses everything as disclosed in claims 1 & 10 respectively. In addition, Swierk discloses wherein the one or more processors are configured to analyze the inbound image data by performing one or more image processing measurements on the inbound image data. (See the claim 1 rejection.)

As to claim 9, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1. In addition, Swierk discloses the display device (110, Fig. 1), wherein the one or more processors (102, Fig. 1) and the display device (110, Fig. 1) are operably connected to each other within a single computer device. (See Fig. 1 & corresponding disclosure.)

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650) as applied in claim 1 above, further in view of Gouch (U.S. Publication 2006/0215168).

As to claim 21, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the set of tuning parameters includes modulation transfer function (MTF) compensation that is applied to the inbound image data responsive to the sharpness of the one or more image attributes being outside the pre-selected target range, the MTF compensation comprising applying a convolution kernel defined with a frequency response that is a frequency-by-frequency ratio of the inbound image data and the pre-selected target range.

However, Gouch discloses wherein the set of tuning parameters includes MTF compensation applied responsive to sharpness being outside the pre-selected target range, the compensation comprising a convolution kernel defined with a frequency response that is a frequency-by-frequency ratio. ([0013] discloses enhancing sharpness by replacing the image's “spatial frequency characteristics”. [0016] discloses that the modifying step can comprise “deconvolving” the digital data from the PSF of the imaging system and “convolving” with the PSF of the optical microscope; it further discloses, in the frequency domain, multiplying a Fourier transform of the data with the ratio of the Fourier transforms of the two PSFs. [0018] discloses storing the modulation transfer function of the digital imaging system. [0036]-[0038] discloses that, in Fourier space, the system uses a Fourier filter and frequency-domain multiplication to implement the desired response.)
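The frequency-by-frequency ratio filter described in the Gouch mapping can be sketched directly: transform the signal, multiply each frequency bin by the ratio of a target response to the measured response, and transform back. The tiny DFT/IDFT below and the reading of the ratio as target/measured are illustrative assumptions, not the reference's exact formulation:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for short illustrative signals)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)).real / n
            for t in range(n)]

def mtf_compensate(signal, measured_mtf, target_mtf, eps=1e-6):
    """Apply a per-frequency gain of target/measured MTF, i.e. a
    compensation filter of the kind described in the Gouch mapping.
    The MTF curves are illustrative per-bin magnitudes."""
    spectrum = dft(signal)
    gains = [t / max(m, eps) for m, t in zip(measured_mtf, target_mtf)]
    return idft([s * g for s, g in zip(spectrum, gains)])
```

When the measured and target responses coincide, the gains are all 1 and the filter is the identity; a rolled-off measured MTF yields gains above 1 at high frequencies, boosting fine detail toward the target response.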
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate Gouch's MTF compensation teaching in order to correct measured sharpness deficiencies using a frequency-response-based kernel and drive sharpness toward the desired target response.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650) as applied in claim 1 above, further in view of KAO (U.S. Publication 2010/0259551).

As to claim 22, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the set of the one or more tuning parameters includes one or both of white balance or color correction adjustments applied to cause the color to be within the pre-selected target range.

However, KAO discloses wherein the set of the one or more tuning parameters includes one or both of white balance or color correction adjustments applied to cause the color to be within the pre-selected target range. ([0015] discloses calculating adjusted gray level settings to achieve the correct output and reduce color deviation. [0017] discloses target color coordinates for a target white point. [0019] discloses measuring luminance and color coordinates of the RGB images. [0028] discloses calculating an error between target and estimated color coordinates and checking whether the error satisfies an error requirement.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate KAO's white balance/color correction teaching in order to adjust color characteristics of the adjusted image data toward a defined target color condition rather than merely applying unspecified color changes.
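The KAO mapping turns on an error check between target and estimated color coordinates. A hedged sketch of that check, plus one illustrative correction step; the tolerance, coordinate values, and gain-nudging heuristic are assumptions for illustration, not taken from KAO:

```python
def white_balance_converged(estimated_xy, target_xy, max_error=0.003):
    """Euclidean error between estimated and target chromaticity
    coordinates, compared against an (illustrative) error requirement."""
    error = ((estimated_xy[0] - target_xy[0]) ** 2
             + (estimated_xy[1] - target_xy[1]) ** 2) ** 0.5
    return error <= max_error

def adjust_gains(rgb_gains, estimated_xy, target_xy, step=0.05):
    """One toy correction step: nudge the red and blue channel gains
    based on the sign of the chromaticity error. Purely illustrative."""
    r, g, b = rgb_gains
    dx = target_xy[0] - estimated_xy[0]
    dy = target_xy[1] - estimated_xy[1]
    return (r + step * dx, g, b - step * dy)
```

The intended shape is a loop: measure, check `white_balance_converged`, and if not converged apply `adjust_gains` and repeat, which is the "error requirement" structure the rejection cites.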
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650) as applied in claim 1 above, further in view of Finnila et al. (U.S. Publication 2013/0308006).

As to claim 23, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the set of the one or more tuning parameters includes noise reduction filtering applied at a predetermined strength to achieve a signal-to-noise ratio within the pre-selected target range.

However, Finnila discloses wherein the set of the one or more tuning parameters includes noise reduction filtering applied at a predetermined strength to achieve a signal-to-noise ratio within the pre-selected target range. ([0009] discloses digital camera characterization parameters may include “noise filtering” and “sharpening”. [0011] discloses that the common dynamic camera configuration file may be divided into parameter sets with unique identification. [0020]-[0021] discloses selecting a set of characterization parameters using image statistics.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate Finnila's noise reduction teaching in order to apply a selected noise reduction strength responsive to measured image conditions and improve the corrected image's noise performance toward the desired quality.
As to claim 26a, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the set of the one or more tuning parameters is a first set of multiple different sets of the one or more tuning parameters, and the memory stores associations between the different camera system types and the multiple different sets of the one or more tuning parameters, and the one or more processors select the first set of the one or more tuning parameters from the multiple different sets based on the identity of the remote camera system.

However, Finnila discloses wherein the set of the one or more tuning parameters is a first set of multiple different sets of the one or more tuning parameters, and the memory stores associations between the different camera system types and the multiple different sets of the one or more tuning parameters, and the one or more processors select the first set of the one or more tuning parameters from the multiple different sets based on the identity of the remote camera system. ([0006-0008] discloses selecting a set of digital camera module characterization parameters from a common dynamic camera configuration file, converting them to image signal processing parameters, and tuning image data using those parameters. [0009] discloses characterization parameters may include camera-specific items such as black level, lens shading correction, noise filtering, and sharpening. [0011] discloses the configuration file may be divided into "at least two parameter sets" with unique identification and version number, and that the identification links the content of a parameter set to the logical image signal processing. [0020-0021] restates the method of providing image data including image statistics and selecting a set of characterization parameters using those statistics.)

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate Finnila's parameter-set selection teaching in order to reuse camera-type-specific ISP tuning data across heterogeneous camera sources without manual retuning.

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650) as applied in claim 1 above, further in view of Morimura (U.S. Publication 2013/0265468).

As to claim 24, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the set of the one or more tuning parameters includes distortion correction by applying field-adaptive digital zoom processing to correct for a distortion profile of the inbound image data.

However, Morimura discloses wherein the set of the one or more tuning parameters includes distortion correction by applying field-adaptive digital zoom processing to correct for a distortion profile of the inbound image data. (Abstract & [0010-0014] disclose that an electronic zoom and distortion correction processing section performs electronic zooming and distortion correction on a distorted image, and changes the distortion correction factor in accordance with the zoom ratio.)

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate Morimura's distortion correction teaching in order to correct lens or field distortion while preserving desired image framing and rendering quality.

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650) as applied in claim 1 above, further in view of KOYAMA et al. (U.S. Publication 2008/0192447).

As to claim 25, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the set of the one or more tuning parameters includes color channel adaptive digital zoom processing to correct lateral color aberrations, and color channel adaptive sharpening to correct longitudinal color aberrations.

However, KOYAMA discloses wherein the set of the one or more tuning parameters includes color channel adaptive digital zoom processing to correct lateral color aberrations, and color channel adaptive sharpening to correct longitudinal color aberrations. ([0013] discloses correcting lateral chromatic aberration by radial scaling of red and blue relative to green, using a filter in the image processing pipe. [0020-0022] discloses the correction flow and look-up-table-based scalers.)

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate KOYAMA's chromatic aberration correction teaching in order to correct both transverse and axial color aberration components in different channels of a processed camera image.

Claim 26b is rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650) as applied in claim 1 above, further in view of Sunkavalli et al. (U.S. Publication 2020/0074682).

As to claim 26b, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the machine learning algorithm is an artificial neural network trained to predict the camera system type from the inbound image data and one or more image quality deficiencies present in the inbound image data.

However, Sunkavalli discloses wherein the machine learning algorithm is an artificial neural network trained to predict the camera system type from the inbound image data and one or more image quality deficiencies present in the inbound image data. (Abstract & [0003-0004] disclose a CNN trained on images with known corresponding camera calibration parameters that establishes correlations between detected image characteristics and camera calibration parameters. See wherein the trained convolutional neural network can determine one or more calibration parameters for a received image based on detected image characteristics and training, and can output the most likely value with a confidence level. See wherein a parameter scheme is encoded into the CNN to define relationships between detectable image characteristics and extrapolated camera calibration parameters.)

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate Sunkavalli's neural network teaching in order to automate camera identification and parameter selection from learned image-characteristic correlations rather than relying only on non-neural comparison logic.

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650) as applied in claim 1 above, further in view of Panetta et al. (U.S. Publication 2015/0243041).

As to claim 27, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the image processing measurements include sampling pixel regions across multiple images of the inbound image data to measure the sharpness, the contrast, the color, the noise, and the distortion.

However, Panetta discloses wherein the image processing measurements include sampling pixel regions across multiple images of the inbound image data to measure the sharpness, the contrast, the color, the noise, and the distortion. ([0004] discloses colorfulness, sharpness, and contrast measures that allow real-time assessment of color images.)

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate Panetta's image measurement teaching in order to obtain more localized and robust estimates of sharpness, contrast, color, noise, and related image characteristics before selecting corrections.

Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Swierk et al. (U.S. Publication 2022/0232189) in view of Muriello et al. (U.S. Publication 2015/0124107) & KIM et al. (U.S. Publication 2010/0134650) as applied in claim 1 above, further in view of Unger et al. (U.S. Publication 2021/0272251).

As to claim 28, Swierk in view of Muriello & KIM discloses everything as disclosed in claim 1 but is silent to wherein the set of the one or more tuning parameters includes contrast compensation that modifies tone scale to adjust rendering to the pre-selected target range.

However, Unger discloses wherein the set of the one or more tuning parameters includes contrast compensation that modifies tone scale to adjust rendering to the pre-selected target range. ([0004] discloses determining a tone curve based on a model of image contrast distortion and tone mapping the input image according to the determined tone curve. [0066-0069] discloses determining the tone curve based on the model of image contrast distortion and calculating tone-curve values that reduce or minimize contrast distortion. [0077] discloses that the values of the tone curve are calculated so as to reduce or minimize expected contrast distortion.)

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the Swierk in view of Muriello & KIM combination to incorporate Unger's tone-curve teaching in order to modify rendered contrast toward the desired target range while reducing contrast distortion introduced by the rendering process.

CONCLUSION

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Stephen P Coleman whose telephone number is (571)270-5931. The examiner can normally be reached Monday-Thursday 8AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at (571) 272-9523.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Stephen P. Coleman
Primary Examiner
Art Unit 2675

/STEPHEN P COLEMAN/
Primary Examiner, Art Unit 2675

Prosecution Timeline

Dec 08, 2023
Application Filed
Oct 29, 2025
Non-Final Rejection — §101, §103
Jan 27, 2026
Response Filed
Mar 30, 2026
Final Rejection — §101, §103
Apr 01, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601591
DISTANCE MEASURING DEVICE, DISTANCE MEASURING METHOD, PROGRAM, ELECTRONIC APPARATUS, LEARNING MODEL GENERATING METHOD, MANUFACTURING METHOD, AND DEPTH MAP GENERATING METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12602429
Video and Audio Multimodal Searching System
2y 5m to grant Granted Apr 14, 2026
Patent 12597146
INFORMATION PROCESSING APPARATUS AND CONTROL METHOD THEREOF
2y 5m to grant Granted Apr 07, 2026
Patent 12591961
MONITORING DEVICE AND MONITORING SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12586237
DEVICE, COMPUTER PROGRAM AND METHOD
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
96%
With Interview (+11.6%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 877 resolved cases by this examiner. Grant probability derived from career allow rate.
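The projections above combine a base grant probability (84%) with an interview lift (+11.6%) to reach the 96% "With Interview" figure. One simple reading, assuming the lift is additive in percentage points (the displayed numbers are consistent with this, though the tool's actual model is not stated), is:

```python
def adjusted_grant_probability(base_pct, lift_pct, cap=100.0):
    """Hypothetical additive model: add the interview lift (in percentage
    points) to the base grant rate, capped at 100%. Matches the displayed
    figures (84 + 11.6 = 95.6, shown as 96%); the real model may differ."""
    return min(base_pct + lift_pct, cap)

print(round(adjusted_grant_probability(84.0, 11.6)))  # → 96
```

The cap matters for high base rates: a 99% base with the same lift would display as 100%, not 110.6%.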
