DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The 13 pages of drawings have been considered and placed on record in the file.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-6, 9-10, 12-14, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Mcelvain (US 20220256071 A1) in view of Kagawa et al. (US 20220270220 A1), Maymon et al. (US 20180270489 A1), and Cote et al. (US 20120081385 A1).
Regarding Claim 1, Mcelvain teaches "A method comprising: receiving, using a processing device associated with an image signal processing (ISP) pipeline, a target image captured using a high dynamic range (HDR) sensor at a first bit-depth associated with the HDR sensor, the ISP pipeline being associated with a second bit-depth"; (Mcelvain, FIG.2, Abstract, and Paras. 31, 39, and 70, teaches an HDR image sensor and a processor and memory to store instructions that control the processor wherein the HDR image sensor generates HDR images at an initial HDR bit depth, i.e., receiving a target image captured using an HDR sensor at a first bit-depth associated with the HDR sensor using a processing device associated with an ISP pipeline, wherein the output of the HDR image sensor is limited to an output bit depth that is less than the initial HDR bit depth for compatibility with device communication protocols in which the bit depth of the digital image data is 8 or 10 bits and the bit depth of each HDR image is 14 or 16 bits, i.e., ISP pipeline is associated with a second bit-depth).
However, Mcelvain does not explicitly teach "applying one or more dynamic range compression operations to brightness values associated with the target image to obtain compressed brightness values; determining brightness gain values using the brightness values and the compressed brightness values; applying the brightness gain values to individual color channels of respective pixels of the target image to obtain a compressed target image that preserves a color ratio of the compressed target image; and adjusting a color correction matrix (CCM) associated with the ISP pipeline using the compressed target image".
In an analogous field of endeavor, Kagawa teaches "applying one or more dynamic range compression operations to brightness values associated with the target image to obtain compressed brightness values"; (Kagawa, FIGs 10A-10B and Paras. 45, 83, and 88, teaches the dynamic range compression unit using the dynamic range compression curve to compress the dynamic range of the luminance of the HDR image data and thus generating SDR image data, i.e., apply a dynamic range compression operation to brightness values associated with a target image to obtain compressed brightness values).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Mcelvain by including the dynamic range compression of brightness values taught by Kagawa. One of ordinary skill in the art would be motivated to combine the references since it reduces the difference in appearance of objects (Kagawa, Abstract, teaches the motivation of combination to be to reduce the difference in appearance of objects due to the influence of luminance contrast).
However, the combination of references of Mcelvain in view of Kagawa does not explicitly teach "determining brightness gain values using the brightness values and the compressed brightness values; applying the brightness gain values to individual color channels of respective pixels of the target image to obtain a compressed target image that preserves a color ratio of the compressed target image; and adjusting a color correction matrix (CCM) associated with the ISP pipeline using the compressed target image".
In an analogous field of endeavor, Maymon teaches "determining brightness gain values using the brightness values and the compressed brightness values"; (Maymon, Abstract and Para. 47, teaches obtaining a gain value representing the ratio between the luminance after compression and the luminance of the original HDR image, i.e., determine brightness gain values using the brightness values and the compressed brightness values);
"applying the brightness gain values to individual color channels of respective pixels of the target image to obtain a compressed target image that preserves a color ratio of the compressed target image"; (Maymon, Paras. 44 and 47, teaches the gain may be multiplied by each of the R,G,B channels of the HDR image to compress the HDR image and obtain an LDR image so that the image dynamic range is reduced while the local contrast of the original scene is preserved and wherein the present techniques operate on the luminance of the RGB image in the linear domain such that the colors remain unchanged, i.e., gain values are applied to individual color channels of respective pixels of the target image to obtain a compressed target image that preserves the color ratio of the compressed image).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Mcelvain and Kagawa by including the determination of brightness gain values using the brightness values and compressed brightness values and applying the gain to color channels of the image to obtain a compressed image that preserves color ratio taught by Maymon. One of ordinary skill in the art would be motivated to combine the references since it enables compression of an HDR image so the luminosity range is made visible (Maymon, Abstract and Para. 8, teaches the motivation of combination to be to enable the compression of an HDR image so that an approximation of the extended luminosity range is made visible).
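The gain-based compression mapped above can be summarized with a short sketch. This is purely illustrative and is not drawn from Mcelvain, Kagawa, or Maymon; the power-law tone curve, the Rec. 709 luma weights, and the 16-bit value range are assumptions chosen for the example.

```python
# Purely illustrative sketch; the power-law tone curve and Rec. 709 luma
# weights are assumptions for this example, not taken from the references.
def compress_pixel(rgb, max_val=65535.0, gamma=0.5):
    r, g, b = rgb
    # Estimate the pixel's brightness (luminance) from its color channels.
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
    if lum <= 0.0:
        return (0.0, 0.0, 0.0)
    # Dynamic range compression: a normalized power-law tone curve.
    compressed_lum = max_val * (lum / max_val) ** gamma
    # Brightness gain: ratio of the compressed to the original brightness.
    gain = compressed_lum / lum
    # Apply the same gain to each color channel of the pixel.
    return (r * gain, g * gain, b * gain)
```

Because a single scalar gain multiplies every channel of a pixel, the R:G:B ratios are unchanged, which is how per-channel application of a luminance-derived gain can preserve the color ratio.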
However, the combination of references of Mcelvain in view of Kagawa and Maymon does not explicitly teach "and adjusting a color correction matrix (CCM) associated with the ISP pipeline using the compressed target image".
In an analogous field of endeavor, Cote teaches "and adjusting a color correction matrix (CCM) associated with the ISP pipeline using the compressed target image"; (Cote, Paras. 163, 179, 388, and 610 teaches image processing circuitry providing image processing steps for image compression such as bright areas of the input image being compressed to a smaller range and wherein raw image data is provided to the ISP front-end logic and processed on a pixel-by-pixel basis in a number of formats in which statistical processing may occur at a precision of 8-bits wherein the raw pixel data having a higher bit-depth may be down-sampled to an 8-bit format for statistics purposes in which imaging statistics are collected to determine the color temperature wherein the estimated color temperature is used to determine/adjust coefficients of a color correction matrix, i.e., a color correction matrix associated with the ISP pipeline is adjusted using the compressed image being the input image which may be compressed to a smaller range and which image statistics are computed at a lower bit-depth in order to adjust the CCM).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Mcelvain, Kagawa, and Maymon, wherein the image is a compressed HDR image, by including the adjusting of a color correction matrix associated with the ISP pipeline using that image, as taught by Cote. One of ordinary skill in the art would be motivated to combine the references since it improves the appearance of the image (Cote, Para. 7, teaches the motivation of combination to be to improve the appearance of the resulting image).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
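For context on the color correction stage referenced above, a minimal sketch of applying and adjusting a 3x3 CCM follows. This is illustrative only and is not taken from Cote; the interpolation between two calibrated matrices is an assumed stand-in for any statistics-driven adjustment, such as one based on estimated color temperature.

```python
# Illustrative sketch only; not taken from the cited references.
def apply_ccm(rgb, ccm):
    """Apply a 3x3 color correction matrix to one RGB pixel."""
    r, g, b = rgb
    return tuple(
        row[0] * r + row[1] * g + row[2] * b  # matrix-vector product
        for row in ccm
    )

def interpolate_ccm(ccm_low, ccm_high, t):
    """Blend two calibrated CCMs (e.g., for low/high color temperature).
    t in [0, 1] selects how far to adjust toward ccm_high."""
    return [
        [a + (b - a) * t for a, b in zip(row_l, row_h)]
        for row_l, row_h in zip(ccm_low, ccm_high)
    ]
```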
Regarding Claim 2, the combination of references of Mcelvain in view of Kagawa, Maymon, and Cote teaches "The method of claim 1, further comprising: receiving, using the processing device, a subsequent image captured at the first bit-depth using the HDR sensor"; (Mcelvain, Paras. 31, 48, and 70, teaches the HDR image sensor is configured to generate HDR images at an initial HDR bit depth in which subsequent tone-compressed HDR images may be received from the HDR image sensor after the first tone-compressed HDR image is captured wherein the HDR image sensor captures raw image data at a capture bit depth prior to the tone-compression step that compresses the bit depth to less than the capture bit depth, i.e., receiving a subsequent image captured at the first bit-depth using the HDR sensor);
"applying the one or more dynamic range compression operations to the subsequent image to obtain a compressed subsequent image at the second bit-depth"; (Mcelvain, Paras. 38-39, teaches the tone compressor tone-compresses each HDR image to produce tone-compressed HDR image wherein the tone-compressor compresses the bit depth from 14 or 16 bits to 10 bits, i.e., applying the one or more dynamic range compression operations to the subsequent image by applying it to each captured HDR image to obtain a compressed subsequent image at the second bit-depth being the lower 10 bits);
"and performing color correction on the compressed subsequent image using the adjusted CCM"; (Cote, Paras. 388 and 560, teaches the color correction logic is configured to apply color correction to the RGB image data using a color correction matrix wherein the coefficients of the CCM are adjusted, i.e., performing color correction on the image using the adjusted CCM).
The proposed combination, as well as the motivation for combining the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, applies to claim 2. Thus, the method recited in claim 2 is met by Mcelvain in view of Kagawa, Maymon, and Cote.
Regarding Claim 4, the combination of references of Mcelvain in view of Kagawa, Maymon, and Cote teaches "The method of claim 1, wherein the second bit-depth is lower than the first bit-depth"; (Mcelvain, FIG.2, Abstract, and Paras. 31, 39, and 70, teaches output of the HDR image sensor is limited to an output bit depth that is less than the initial HDR bit depth for compatibility with device communication protocols in which the bit depth of the digital image data is 8 or 10 bits and the bit depth of each HDR image is 14 or 16 bits, i.e., the second bit-depth being the output bit depth or digital image bit-depth is lower than the first bit-depth being the HDR image bit-depth).
Regarding Claim 5, the combination of references of Mcelvain in view of Kagawa, Maymon, and Cote teaches "The method of claim 1, wherein the brightness gain values are scaling factors between the brightness values and the compressed brightness values"; (Maymon, Para. 47, teaches the gain may be multiplied by each of the R,G,B channels of the HDR image to compress the HDR image and obtain an LDR image, i.e., the gain is a scaling factor between the brightness values and the compressed brightness values).
The proposed combination, as well as the motivation for combining the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, applies to claim 5. Thus, the method recited in claim 5 is met by Mcelvain in view of Kagawa, Maymon, and Cote.
Regarding Claim 6, the combination of references of Mcelvain in view of Kagawa, Maymon, and Cote teaches "The method of claim 1, wherein the one or more dynamic range compression operations are based on a compression curve associated with the ISP pipeline"; (Kagawa, FIGS. 10A-10B and Paras. 1, 45, 83, and 87-88, teaches an image processing method and apparatus comprising a dynamic range compression unit using the dynamic range compression curve to compress the dynamic range of the luminance of the HDR image data and thus generating SDR image data wherein the curve shape may be set to an S-shape or selected from a plurality of candidates according to contrast intensity for subsequent processing, i.e., apply a dynamic range compression operation based on a compression curve associated with the ISP pipeline).
The proposed combination, as well as the motivation for combining the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, applies to claim 6. Thus, the method recited in claim 6 is met by Mcelvain in view of Kagawa, Maymon, and Cote.
Claim 9 recites a system with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, and Cote references discloses an HDR sensor and processing device (for example, see Mcelvain, Paragraph 29).
Claim 10 recites a system with elements corresponding to the steps recited in Claim 2. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, and Cote references discloses an HDR sensor and processing device (for example, see Mcelvain, Paragraph 29).
Claim 12 recites a system with elements corresponding to the steps recited in Claim 4. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, and Cote references discloses an HDR sensor and processing device (for example, see Mcelvain, Paragraph 29).
Claim 13 recites a system with elements corresponding to the steps recited in Claim 5. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, and Cote references discloses an HDR sensor and processing device (for example, see Mcelvain, Paragraph 29).
Claim 14 recites a system with elements corresponding to the steps recited in Claim 6. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, and Cote references discloses an HDR sensor and processing device (for example, see Mcelvain, Paragraph 29).
Claim 18 recites one or more processors with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, and Cote references discloses one or more processors (for example, see Mcelvain, Paragraph 7).
Claim 19 recites one or more processors with elements corresponding to the steps recited in Claim 2. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, and Cote references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, and Cote references discloses one or more processors (for example, see Mcelvain, Paragraph 7).
Claims 3, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mcelvain in view of Kagawa, Maymon, Cote, and Cote et al. (US 20150296193 A1, hereinafter Cote’193).
Regarding Claim 3, the combination of references of Mcelvain in view of Kagawa, Maymon, and Cote does not explicitly teach "The method of claim 2, wherein the adjusted CCM transforms a color space of the compressed subsequent image from a nonlinear color space associated with the high dynamic range sensor to a linear color space".
In an analogous field of endeavor, Cote’193 teaches "The method of claim 2, wherein the adjusted CCM transforms a color space of the compressed subsequent image from a nonlinear color space associated with the high dynamic range sensor to a linear color space"; (Cote'193, Paras. 353 and 412, teaches the CCM may be configured to convert from a camera RGB color space to a linear sRGB calibrated space wherein the raw image data received from the HDR sensors may be nonlinear, i.e., CCM transforms a color space of the image from a nonlinear color space associated with the HDR sensor to a linear color space).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Mcelvain, Kagawa, Maymon, and Cote, wherein the image is the compressed subsequent image processed with an adjusted CCM, by including the CCM transforming a color space of the image from a nonlinear color space associated with the HDR sensor to a linear color space, as taught by Cote’193. One of ordinary skill in the art would be motivated to combine the references since it more adequately accounts for edges and noise (Cote'193, Para. 6, teaches the motivation of combination to be to more adequately account for the locations and direction of edges within the image and to account for existing noise in the image signal).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
Claim 11 recites a system with elements corresponding to the steps recited in Claim 3. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, Cote, and Cote’193 references, presented in the rejection of Claim 3, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, Cote, and Cote’193 references discloses an HDR sensor and processing device (for example, see Mcelvain, Paragraph 29).
Claim 20 recites one or more processors with elements corresponding to the steps recited in Claim 3. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, Cote, and Cote’193 references, presented in the rejection of Claim 3, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, Cote, and Cote’193 references discloses one or more processors (for example, see Mcelvain, Paragraph 7).
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mcelvain in view of Kagawa, Maymon, Cote, Cote’193, Chun et al. (US 20220164596 A1), and Li (US 20090147098 A1).
Regarding Claim 7, the combination of references of Mcelvain in view of Kagawa, Maymon, Cote, and Cote’193 teaches "The method of claim 1, wherein the adjusting the CCM associated with the ISP pipeline using the compressed target image comprises: converting the compressed target image from a color space associated with the HDR sensor to a perceptually uniform color space"; (Cote'193, Paras. 353, 401, and 411-412, teaches determining/adjusting coefficients of a color correction matrix based on imaging statistics to determine color temperature wherein statistics color space conversion logic is allowed to replicate the color processing of the RGB processing logic in the ISP pipe processing logic by applying a color correction matrix for a given color temperature which may also provide for the conversion of the Bayer RGB values to a more color consistent color space such as CIELab and wherein raw image data is received from HDR sensors which is nonlinear, i.e., adjusting CCM associated with the ISP pipeline comprises converting the image from a color space associated with the HDR sensor being the Bayer RGB space to a perceptually uniform color space being the CIELab color space).
The proposed combination, as well as the motivation for combining the Mcelvain in view of Kagawa, Maymon, Cote, and Cote’193 references, presented in the rejection of Claim 3, applies to claim 7.
However, the combination of references of Mcelvain in view of Kagawa, Maymon, Cote, and Cote’193 does not explicitly teach "analyzing color differences between known color values associated with the target image and color values associated with the compressed target image in the perceptually uniform color space; and determining CCM coefficients based on the color differences".
In an analogous field of endeavor, Chun teaches "analyzing color differences between known color values associated with the target image and color values associated with the compressed target image in the perceptually uniform color space"; (Chun, Para. 18, teaches a color similarity analysis step including measuring the color similarity by converting the images represented as RGB color space data into Lab color space data and then calculating a color difference between pixels matching between the images in the Lab color space data, i.e., analyzing color difference between known color values of an image and another image in the perceptually uniform color space being LAB color space).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Mcelvain in view of Kagawa, Maymon, Cote, and Cote’193, wherein one image is the target image and the other is the compressed target image, by including the analysis of color differences between known color values associated with the two images in the perceptually uniform color space, as taught by Chun. One of ordinary skill in the art would be motivated to combine the references since it evaluates the performance of the image (Chun, Para. 9, teaches the motivation of combination to be to evaluate the performance of the pattern image in the environment image).
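The Lab-space color difference referenced in the Chun mapping can be stated briefly. This sketch is not from Chun; it shows the standard CIE76 delta-E definition, the Euclidean distance in the perceptually uniform Lab space.

```python
# Illustrative only: CIE76 delta-E, the Euclidean distance between two
# colors already expressed in the perceptually uniform Lab color space.
def delta_e76(lab1, lab2):
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```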
However, the combination of references of Mcelvain in view of Kagawa, Maymon, Cote, Cote’193, and Chun does not explicitly teach "and determining CCM coefficients based on the color differences".
In an analogous field of endeavor, Li teaches "and determining CCM coefficients based on the color differences"; (Li, Para. 32, teaches the color coefficients of a color correction matrix are adjusted to minimize color differences, i.e., determining CCM coefficients based on color differences).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Mcelvain in view of Kagawa, Maymon, Cote, Cote’193, and Chun, wherein color differences are determined between the two images, by including the determination of CCM coefficients based on the color differences, as taught by Li. One of ordinary skill in the art would be motivated to combine the references since it improves the accuracy of color reproduction (Li, Para. 7, teaches the motivation of combination to be to improve the accuracy of color reproduction).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
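As an illustration of determining CCM coefficients from color differences (this is not the method of Li or any other cited reference; a least-squares fit in linear RGB is assumed for simplicity):

```python
# Illustrative only: fit 3x3 CCM coefficients by least squares so that
# corrected colors match known reference colors, minimizing the summed
# squared (linear-RGB) color differences over all measured patches.
import numpy as np

def fit_ccm(measured, reference):
    """measured, reference: N x 3 arrays of (r, g, b) patch colors.
    Returns a 3x3 CCM such that CCM @ measured_i approximates reference_i."""
    M = np.asarray(measured, dtype=float)   # N x 3 measured colors
    R = np.asarray(reference, dtype=float)  # N x 3 known colors
    # corrected_i = C @ m_i for all i  <=>  M @ C.T ~= R  (least squares)
    C_T, *_ = np.linalg.lstsq(M, R, rcond=None)
    return C_T.T
```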
Claim 15 recites a system with elements corresponding to the steps recited in Claim 7. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, Cote, Cote’193, Chun, and Li references, presented in the rejection of Claim 7, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, Cote, Cote’193, Chun, and Li references discloses an HDR sensor and processing device (for example, see Mcelvain, Paragraph 29).
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mcelvain in view of Kagawa, Maymon, Cote, and Honjo (US 10430457 B2).
Regarding Claim 8, the combination of references of Mcelvain in view of Kagawa, Maymon, and Cote does not explicitly teach "The method of claim 1, wherein the target image is a color chart comprising a plurality of regions of known color values".
In an analogous field of endeavor, Honjo teaches "The method of claim 1, wherein the target image is a color chart comprising a plurality of regions of known color values"; (Honjo, Claim 1, teaches a captured image comprising a surface on which a color chart is disposed and extracting color values from colored regions classified as an identical color in the color chart that is contained in the captured image, i.e., image is of a color chart comprising a plurality of regions of known color values).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Mcelvain in view of Kagawa, Maymon, and Cote by including the image comprising a color chart of a plurality of regions of known color values taught by Honjo. One of ordinary skill in the art would be motivated to combine the references since it enables search of an object using color (Honjo, Col. 1 lines 57-61, teaches the motivation of combination to be to enable search of an object whose image has been captured under different environments by using the desired color).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
Claim 16 recites a system with elements corresponding to the steps recited in Claim 8. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Mcelvain in view of Kagawa, Maymon, Cote, and Honjo references, presented in the rejection of Claim 8, apply to this claim. Finally, the combination of the Mcelvain in view of Kagawa, Maymon, Cote, and Honjo references discloses an HDR sensor and processing device (for example, see Mcelvain, Paragraph 29).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Mcelvain in view of Kagawa, Maymon, Cote, and Kiser et al. (US 20180048801 A1).
Regarding Claim 17, the combination of references of Mcelvain in view of Kagawa, Maymon, and Cote does not explicitly teach "The system of claim 9, wherein the system is comprised of at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing one or more simulation operations; a system for performing one or more digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing one or more deep learning operations; a system for presenting at least one of augmented reality content, virtual reality content, or mixed reality content; a system for hosting one or more real-time streaming applications; a system implemented using an edge device; a system implemented using a robot; a system for performing one or more conversational AI operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system for performing one or more generative AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources".
In an analogous field of endeavor, Kiser teaches "The system of claim 9, wherein the system is comprised of at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing one or more simulation operations; a system for performing one or more digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing one or more deep learning operations; a system for presenting at least one of augmented reality content, virtual reality content, or mixed reality content; a system for hosting one or more real-time streaming applications; a system implemented using an edge device; a system implemented using a robot; a system for performing one or more conversational AI operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system for performing one or more generative AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources"; (Kiser, Paras. 138-140, teaches HDR compression wherein the processing system determines an appearance of an item in an environment of the vehicle based on the HDR video and causes the control system of the autonomous vehicle to make a change in the operation of the vehicle based on the item's appearance, i.e., system comprises a control system for an autonomous machine).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Mcelvain, Kagawa, Maymon, and Cote by including the system comprising a control system for an autonomous machine taught by Kiser. One of ordinary skill in the art would be motivated to combine the references since it makes the operation of the vehicle safer (Kiser, Para. 5, teaches the motivation of combination to be to make the operation of autonomous vehicles safer).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW STEVEN BUDISALICH, whose telephone number is (703) 756-5568. The examiner can normally be reached Monday - Friday, 8:30am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW S BUDISALICH/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662