DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The Examiner has determined that the instant application is entitled to the benefit of provisional application 63/175,505, filed 04/15/2021, which establishes the effective filing date. Accordingly, the prior art search was conducted on the basis of that date.
Election/Restrictions
The Examiner finds Applicant's argument persuasive. Accordingly, claims 1–25 remain pending in the instant application.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1–2, 4 and 15 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 10 of copending Application No. 18/545,874 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other, as follows:
Regarding claim 1,
| Instant Application (18/526,787), Claim 1 | Co-pending Application (18/545,874), Claim 1 |
| --- | --- |
| A method of detecting objects from camera-produced images comprising: | A method of detecting objects from camera-produced images comprising: |
| generating multiple raw exposure-specific images for a scene; | generating multiple raw exposure-specific images for a scene; |
| performing for said multiple raw exposure-specific images respective processes of image enhancement to produce respective processed exposure-specific images; | performing for the multiple raw exposure-specific images respective processes of image enhancement to produce respective processed exposure-specific images; |
| extracting from said processed exposure-specific images respective sets of exposure-specific features collectively constituting a superset of features; | extracting from the processed exposure-specific images respective sets of exposure-specific features collectively constituting a superset of features; |
| fusing constituent exposure-specific sets of features of said superset of features to form a set of fused features; | (no corresponding limitation) |
| identifying a set of candidate objects from said set of fused features; and | identifying, using the respective sets of exposure-specific features, exposure-specific sets of candidate objects; and |
| _pruning said set of candidate objects to produce a set of objects within said scene._¹ | fusing the exposure-specific sets of candidate objects to form a fused set of candidate objects. |
Except for the underscored limitation in claim 1 above, the claims of the instant application are not patentably distinct from the claims of the reference application because the differences between them would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
Claim 2 is rejected in the same manner as claim 1.
Regarding claim 4,
| Instant Application (18/526,787), Claim 4 | Co-pending Application (18/545,874), Claim 10 |
| --- | --- |
| generating multiple raw exposure-specific images for a scene; | generating multiple raw exposure-specific images for a scene; |
| deriving for each raw exposure-specific image a respective multi-level regional illumination distribution for use in computing respective exposure settings; | deriving for each raw exposure-specific image a respective multi-level regional illumination distribution for use in computing respective exposure settings; |
| performing for said multiple raw exposure-specific images respective processes of image enhancement to produce respective processed exposure-specific images; | performing for the multiple raw exposure-specific images respective processes of image enhancement to produce respective processed exposure-specific images; |
| extracting from said processed exposure-specific images respective sets of exposure-specific features collectively constituting a superset of features; | extracting from the processed exposure-specific images respective sets of exposure-specific features collectively constituting a superset of features; |
| recognizing a set of candidate objects using said superset of features; and | detecting a set of candidate objects using the superset of features; and |
| pruning said set of candidate objects to produce a set of objects within said scene. | pruning the set of candidate objects to produce a set of objects within the scene. |
Claim 15 is rejected in the same manner as claim 4.
Accordingly, dependent claims 3, 5–14, and 16–25 are rejected.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1–8, 15–21 and 25 are rejected under 35 U.S.C. § 103 as being unpatentable over Nerkar (U.S. 12,175,645 B2) in view of Desai et al. (U.S. 11,017,271 B2).
Regarding claim 1, Nerkar discloses a method of detecting objects from camera-produced images comprising:
generating multiple raw exposure-specific images² for a scene; (Per Fig. 2A, Nerkar discloses three LDR input images. Nerkar, col. 7, lines 42–50: "[t]he three images 202a, 202b, 202c on the top are LDR input images which are composited into the single output HDR image 206 at the bottom.")
performing for said multiple raw exposure-specific images respective processes of image enhancement (image enhancement construed as optimization of the LDR images) to produce respective processed exposure-specific images³; (Per Fig. 5B, Nerkar discloses multiple output image frames 504, 508, and 512 reflecting optimization of the LDR images. Id., col. 16, line 61 – col. 17, line 10: "[t]he elimination of said one or more data extraction/data processing/histogram generation steps in across compositing iterations would result in significant optimizations in generation of successive output HDR image frames.")
extracting from said processed exposure-specific images respective sets of exposure-specific features collectively constituting a superset of features (constituting a superset of features construed as the compositing process); (Per Fig. 5B, Nerkar discloses compositing processes that execute data extraction for each HDR image frame. Id., col. 15, line 21 – col. 16, line 6: "[t]he compositing process, instead of having to execute the same data extraction/data processing/histogram generation steps a second time.")
fusing constituent exposure-specific sets of features (construed as the compositing process in which input images are mixed to generate HDR images after training) of said superset of features to form a set of fused features (construed as bracketing input images into output images at a higher exposure level); (Per Figs. 4A–4B, Nerkar discloses a trained model in which input images are bracketed to generate composited HDR image frames. Id., col. 12, lines 37–60: "[a] first output HDR image frame 404 (Output_HDR_Image_Frame_1) is generated (at step 4002) by running a first iteration of a compositing process on a first bracketed input image series 402 (Bracketed_Input_Image_Series_1) comprising a first input image 402a (Input_Image_Low_1) acquired at a relatively low exposure level (in comparison with second input image 402b).")
Nerkar fails to specifically disclose identifying a set of candidate objects from said set of fused features; and pruning said set of candidate objects to produce a set of objects within said scene.
In related art, Desai discloses identifying a set of candidate objects from said set of fused features⁴; and (Per Fig. 1, Desai's training instance generation module 112 detects objects in each image by training on correlated features in the machine learning model. Desai, col. 7, line 65 – col. 8, line 11: "The training instance generation module 112 first detects objects within each candidate image and then crops them as training object images.")
pruning said set of candidate objects (construed as cropping the detected objects out as training object images) to produce a set of objects within said scene. (Per Fig. 1, Desai discloses cropping the detected objects out such that each resulting image contains exactly the object in the trained model. Id., col. 11, lines 37–49: "For an image containing multiple objects, the pipeline first automatically detects objects within the image, and crops them out as training object images (i.e., an object image contains exactly one single object to be labeled).")
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Desai into the teachings of Nerkar in order to prune detected objects from the features extracted from the images. Id., col. 11, lines 24–36.
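For clarity of the record, the multi-exposure compositing construction applied above can be illustrated with the following minimal Python sketch. The function names, the well-exposedness weighting scheme (a simplification of Mertens-style exposure fusion), and all parameters are the Examiner's hypothetical illustrations and are not drawn from Nerkar's disclosure:

```python
import numpy as np

def fuse_exposures(ldr_images, sigma=0.2):
    """Composite bracketed LDR exposures into a single image.

    Generic well-exposedness weighting: each pixel is weighted by its
    closeness to mid-gray, so the best-exposed bracket dominates the
    fused result. Illustrative sketch only, not the cited method.
    ldr_images: list of 2-D float arrays with values in [0, 1].
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in ldr_images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)           # weighted composite

# Hypothetical usage: three exposures (low / mid / high) of one scene.
rng = np.random.default_rng(0)
scene = rng.random((4, 4))
brackets = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.5, 1.0, 2.0)]
fused = fuse_exposures(brackets)
```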
Regarding claim 2, Nerkar discloses a method of detecting objects from camera-produced images comprising:
generating multiple raw exposure-specific images for a scene; (Per Fig. 2A, Nerkar discloses three LDR input images. Nerkar, col. 7, lines 42–50: "[t]he three images 202a, 202b, 202c on the top are LDR input images which are composited into the single output HDR image 206 at the bottom.")
performing for said multiple raw exposure-specific images respective processes of image enhancement (image enhancement construed as optimization of the LDR images) to produce respective processed exposure-specific images; (Per Fig. 5B, Nerkar discloses multiple output image frames 504, 508, and 512 reflecting optimization of the LDR images. Id., col. 16, line 61 – col. 17, line 10: "[t]he elimination of said one or more data extraction/data processing/histogram generation steps in across compositing iterations would result in significant optimizations in generation of successive output HDR image frames.")
extracting from said processed exposure-specific images respective sets of exposure-specific features collectively constituting a superset of features (constituting a superset of features construed as the compositing process); (Per Fig. 5B, Nerkar discloses compositing processes that execute data extraction for each HDR image frame. Id., col. 15, line 21 – col. 16, line 6: "[t]he compositing process, instead of having to execute the same data extraction/data processing/histogram generation steps a second time.")
fusing constituent exposure-specific sets of features (construed as the compositing process in which input images are mixed to generate HDR images after training) of said superset of features to form a fused set of candidate objects (construed as bracketing input images into output images at a higher exposure level); (Per Figs. 4A–4B, Nerkar discloses a trained model in which input images are bracketed to generate composited HDR image frames. Id., col. 12, lines 37–60: "[a] first output HDR image frame 404 (Output_HDR_Image_Frame_1) is generated (at step 4002) by running a first iteration of a compositing process on a first bracketed input image series 402 (Bracketed_Input_Image_Series_1) comprising a first input image 402a (Input_Image_Low_1) acquired at a relatively low exposure level (in comparison with second input image 402b).")
Nerkar fails to specifically disclose identifying, using said respective sets of exposure-specific features, exposure-specific sets of candidate objects; and pruning said set of candidate objects to produce a set of objects within said scene.
In related art, Desai discloses identifying, using said respective sets of exposure-specific features, exposure-specific sets of candidate objects; and (Per Fig. 1, Desai's training instance generation module 112 detects objects in each image by training on correlated features in the machine learning model. Desai, col. 7, line 65 – col. 8, line 11: "The training instance generation module 112 first detects objects within each candidate image and then crops them as training object images.")
pruning said set of candidate objects (construed as cropping the detected objects out as training object images) to produce a set of objects within said scene. (Per Fig. 1, Desai discloses cropping the detected objects out such that each resulting image contains exactly the object in the trained model. Id., col. 11, lines 37–49: "For an image containing multiple objects, the pipeline first automatically detects objects within the image, and crops them out as training object images (i.e., an object image contains exactly one single object to be labeled).")
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Desai into the teachings of Nerkar in order to prune detected objects from the features extracted from the images. Id., col. 11, lines 24–36.
Regarding claim 4, Nerkar discloses a method of detecting objects from camera-produced images comprising:
generating multiple raw exposure-specific images for a scene; (Per Fig. 2A, Nerkar discloses three LDR input images. Nerkar, col. 7, lines 42–50: "[t]he three images 202a, 202b, 202c on the top are LDR input images which are composited into the single output HDR image 206 at the bottom.")
deriving for each raw exposure-specific image a respective multi-level regional illumination distribution for use in computing respective exposure settings; (Per Fig. 2B, Nerkar discloses a histogram for each input image representing the distribution of pixel illumination values. Id., col. 8, lines 21–38: "Each histogram 204a, 204b, 204c corresponding respectively to an input image 202a, 202b, 202c is a statistical representation of the number of pixel illumination values that lie in a certain range of values.")
performing for said multiple raw exposure-specific images respective processes of image enhancement (image enhancement construed as optimization of the LDR images) to produce respective processed exposure-specific images; (Per Fig. 5B, Nerkar discloses multiple output image frames 504, 508, and 512 reflecting optimization of the LDR images. Id., col. 16, line 61 – col. 17, line 10: "[t]he elimination of said one or more data extraction/data processing/histogram generation steps in across compositing iterations would result in significant optimizations in generation of successive output HDR image frames.")
extracting from said processed exposure-specific images respective sets of exposure-specific features collectively constituting a superset of features (constituting a superset of features construed as the compositing process). (Per Fig. 5B, Nerkar discloses compositing processes that execute data extraction for each HDR image frame. Id., col. 15, line 21 – col. 16, line 6: "[t]he compositing process, instead of having to execute the same data extraction/data processing/histogram generation steps a second time.")
Nerkar fails to specifically disclose recognizing a set of candidate objects using said superset of features; and pruning said set of candidate objects to produce a set of objects within said scene.
In related art, Desai discloses recognizing a set of candidate objects using said superset of features; and (Per Fig. 1, Desai's training instance generation module 112 detects objects in each image by training on correlated features in the machine learning model. Desai, col. 7, line 65 – col. 8, line 11: "The training instance generation module 112 first detects objects within each candidate image and then crops them as training object images.")
pruning said set of candidate objects (construed as cropping the detected objects out as training object images) to produce a set of objects within said scene. (Per Fig. 1, Desai discloses cropping the detected objects out such that each resulting image contains exactly the object in the trained model. Id., col. 11, lines 37–49: "For an image containing multiple objects, the pipeline first automatically detects objects within the image, and crops them out as training object images (i.e., an object image contains exactly one single object to be labeled).")
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Desai into the teachings of Nerkar in order to prune detected objects from the features extracted from the images. Id., col. 11, lines 24–36.
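For clarity of the record, the "multi-level regional illumination distribution" limitation as mapped above may be illustrated with the following minimal Python sketch. The grid layout, level choices, and names are hypothetical illustrations by the Examiner, chosen only so that each region of one level encompasses an integer number of regions of the subsequent level (cf. claim 5); they are not drawn from the cited references:

```python
import numpy as np

def regional_histograms(image, grid_sizes=(1, 2, 4), bins=16):
    """Multi-level regional luminance histograms (illustrative only).

    For each level, the image is partitioned into an n x n grid and a
    luminance histogram is computed per region. With grid sizes
    1, 2, 4, each region of one level encompasses an integer number
    (here, four) of regions of the subsequent level.
    image: 2-D float array with luminance values in [0, 1].
    """
    h, w = image.shape
    levels = {}
    for n in grid_sizes:
        hists = []
        for i in range(n):
            for j in range(n):
                tile = image[i * h // n:(i + 1) * h // n,
                             j * w // n:(j + 1) * w // n]
                hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
                hists.append(hist)
        levels[n] = hists
    return levels

# Hypothetical usage on a synthetic luminance map.
lum = np.random.default_rng(1).random((64, 64))
dist = regional_histograms(lum)  # {1: 1 hist, 2: 4 hists, 4: 16 hists}
```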
Regarding claim 15, Nerkar discloses an apparatus for detecting objects, from camera-produced images of a time-varying scene, comprising:
a hardware master processor coupled to a pool of hardware intermediate processors; (Fig. 6, an HDR frame generator 600)
a sensing-processing device comprising:
a sensor; (Fig. 6, an image sensor interface 608)
a sensor-control device comprising a neural auto-exposure controller, coupled to a light-collection component, configured to: (Fig. 6, a compositing controller 612)
generate a specified number of time-multiplexed exposure-specific raw SDR images; and (Per Fig. 2A, Nerkar discloses three LDR input images. Nerkar, col. 7, lines 42–50: "[t]he three images 202a, 202b, 202c on the top are LDR input images which are composited into the single output HDR image 206 at the bottom.")
derive for each exposure-specific raw SDR image respective multi-level luminance histograms; (Per Fig. 2B, Nerkar discloses a histogram for each input image representing the distribution of pixel illumination values. Id., col. 8, lines 21–38: "Each histogram 204a, 204b, 204c corresponding respectively to an input image 202a, 202b, 202c is a statistical representation of the number of pixel illumination values that lie in a certain range of values.")
an image-processing device configured to perform predefined image-enhancing procedures for each said raw SDR image to yield multiple exposure-specific processed images; (Per Fig. 5B, Nerkar discloses multiple output image frames 504, 508, and 512 reflecting optimization of the LDR images. Id., col. 16, line 61 – col. 17, line 10: "[t]he elimination of said one or more data extraction/data processing/histogram generation steps in across compositing iterations would result in significant optimizations in generation of successive output HDR image frames.")
a features-extraction device configured to extract from said multiple exposure-specific processed images respective sets of exposure-specific features collectively constituting a superset of features. (Per Fig. 5B, Nerkar discloses compositing processes that execute data extraction for each HDR image frame. Id., col. 15, line 21 – col. 16, line 6: "[t]he compositing process, instead of having to execute the same data extraction/data processing/histogram generation steps a second time.")
Nerkar fails to specifically disclose an objects-detection device configured to identify a set of candidate objects using said superset of features; and
a pruning module configured to filter said set of candidate objects to produce a set of pruned objects within said time-varying scene.
In related art, Desai discloses an objects-detection device configured to identify a set of candidate objects using said superset of features; and (Per Fig. 1, Desai's training instance generation module 112 detects objects in each image by training on correlated features in the machine learning model. Desai, col. 7, line 65 – col. 8, line 11: "The training instance generation module 112 first detects objects within each candidate image and then crops them as training object images.")
a pruning module configured to filter said set of candidate objects to produce a set of pruned objects within said time-varying scene. (Per Fig. 1, Desai discloses cropping the detected objects out such that each resulting image contains exactly the object in the trained model. Id., col. 11, lines 37–49: "For an image containing multiple objects, the pipeline first automatically detects objects within the image, and crops them out as training object images (i.e., an object image contains exactly one single object to be labeled).")
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Desai into the teachings of Nerkar in order to prune detected objects from the features extracted from the images. Id., col. 11, lines 24–36.
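For clarity of the record, the construal applied above (pruning as cropping detections into single-object images) may be illustrated with the following minimal Python sketch. The detector output, coordinates, and names are placeholder illustrations by the Examiner and are not drawn from Desai:

```python
import numpy as np

def crop_detections(image, boxes):
    """Crop each candidate detection out of the image so that each
    crop contains exactly one object, mirroring the construal of
    'pruning' applied above. Illustrative sketch only.
    boxes: iterable of (x1, y1, x2, y2) pixel coordinates.
    """
    return [image[y1:y2, x1:x2].copy() for (x1, y1, x2, y2) in boxes]

# Hypothetical usage with placeholder detector output.
frame = np.zeros((100, 100))
candidates = [(10, 10, 40, 40), (50, 60, 90, 95)]
object_images = crop_detections(frame, candidates)  # one crop per object
```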
Regarding claim 3, Nerkar, as modified by Desai, discloses the method further comprising deriving for each new raw exposure-specific image a respective multi-level regional illumination distribution for use in computing respective exposure settings. (Per Fig. 2B, Nerkar discloses a histogram for each input image representing the distribution of pixel illumination values. Nerkar, col. 8, lines 21–38: "Each histogram 204a, 204b, 204c corresponding respectively to an input image 202a, 202b, 202c is a statistical representation of the number of pixel illumination values that lie in a certain range of values.")
Regarding claim 5, Nerkar, as modified by Desai, discloses the method further comprising selecting image regions, for use in said deriving, categorized in a predefined number of levels so that each region of a level, other than a last level of said predefined number of levels, encompasses an integer number of regions of each subsequent level. (Per Fig. 3A, step 3002, Nerkar discloses that a first input image is acquired at a low exposure level, whereas a second input image is acquired at a higher exposure level. Nerkar, col. 11, lines 9–16: "[a] first input image acquired at a relatively low exposure level (in comparison with a second input image) and a second input image acquired at a relatively higher exposure level in comparison with the first input image.")
Regarding claim 6, Nerkar, as modified by Desai, discloses the method wherein said respective processes of image enhancement are performed according to one of:
sequentially using (sequentially using construed as the parallel structure of a processing unit) a single image-signal-processor; (Per Fig. 9, Desai's processing system 900 comprises a graphics processing unit having a highly parallel structure. Desai, col. 24, lines 37–47: "[a]nd has a highly parallel structure that makes it more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel.")
Regarding claim 7, Nerkar, as modified by Desai, discloses the method wherein said recognizing comprises:
fusing constituent exposure-specific sets of features of said superset of features to form a set of fused features; and (Per Figs. 4A–4B, Nerkar discloses a trained model in which input images are bracketed to generate composited HDR image frames. Nerkar, col. 12, lines 37–60: "[a] first output HDR image frame 404 (Output_HDR_Image_Frame_1) is generated (at step 4002) by running a first iteration of a compositing process on a first bracketed input image series 402 (Bracketed_Input_Image_Series_1) comprising a first input image 402a (Input_Image_Low_1) acquired at a relatively low exposure level (in comparison with second input image 402b).")
identifying a set of candidate objects from said set of fused features. (Per Fig. 1, Desai's training instance generation module 112 detects objects in each image by training on correlated features in the machine learning model. Desai, col. 7, line 65 – col. 8, line 11: "The training instance generation module 112 first detects objects within each candidate image and then crops them as training object images.")
Regarding claim 8, it has been rejected in the same manner as claim 7.
Regarding claim 16, Nerkar, as modified by Desai, discloses the apparatus wherein:
said hardware master-processor is communicatively coupled to each hardware intermediate processor through one of:
a shared bus; or (Per Fig. 7, Nerkar's system interconnects multiple processors with a bus. Nerkar, col. 19, lines 4–35: "An interconnection mechanism (not shown) such as a bus, controller, or network, interconnects the components of the computer system 702.")
Regarding claim 17, Nerkar, as modified by Desai, discloses the apparatus wherein each of said sensing-processing device, image-processing device, features-extraction device, and objects-detection device is coupled to a respective hardware intermediate processor of said pool of hardware intermediate processors, thereby facilitating dissemination of control data through the apparatus. (Per Fig. 9, Desai's processing system 900 comprises a graphics processing unit having a highly parallel structure. Desai, col. 24, lines 37–47: "[a]nd has a highly parallel structure that makes it more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel.")
Regarding claim 18, it has been rejected in the same manner as claim 5.
Regarding claim 19, it has been rejected in the same manner as claim 6.
Regarding claim 20, it has been rejected in the same manner as claim 7.
Regarding claim 21, it has been rejected in the same manner as claim 8.
Claims 9–10 are rejected under 35 U.S.C. § 103 as being unpatentable over Nerkar in view of Desai, and further in view of He et al., "Bounding Box Regression With Uncertainty for Accurate Object Detection," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Regarding claim 9, Nerkar, as modified by Desai, discloses the claimed invention but fails to specifically disclose the method further comprising: determining objectness of each detected object of said fused set of candidate objects; and pruning said fused set of candidate objects according to a non-maximum-suppression criterion.
In related art, He discloses the method further comprising:
determining objectness of each detected object of said fused set of candidate objects; and (Per Fig. 3, He discloses object detectors in the trained model. He, p. 2890, § 3.1 (Bounding Box Parameterization): "Based on a two-stage object detector Faster R-CNN or Mask R-CNN [42, 17] shown in Figure 3, we propose to regress the boundaries of a bounding box separately.")
pruning said fused set of candidate objects according to a non-maximum-suppression criterion. (He discloses KL Loss to improve the filtering of detected objects by adjusting ambiguous bounding boxes. Id., p. 2894, § 4.2 (Accurate Object Detection): "With KL Loss, the network can learn to adjust the gradient for ambiguous bounding boxes during training.")
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of He into the teachings of Nerkar and Desai in order to provide a novel bounding-box regression loss that accounts for uncertainties in bounding-box prediction. Id., p. 2888, § 1 (Introduction).
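For clarity of the record, a non-maximum-suppression criterion generally may be illustrated with the following minimal Python sketch of textbook greedy NMS. This sketch does not reproduce He's KL-Loss or variance-voting refinements; all names, thresholds, and values are the Examiner's hypothetical illustrations:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-
    scoring candidate and suppress lower-scoring overlapping boxes."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [k for k in order
                 if iou(boxes[best], boxes[k]) < iou_threshold]
    return keep

# Hypothetical usage: two overlapping candidates and one separate one.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # -> [0, 2]; the overlapping box 1 is pruned
```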
Regarding claim 10, it has been rejected in the same manner as claim 9.
Allowable Subject Matter
Claims 11–14 and 22–24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Park (U.S. 9,384,539 B2) discloses a method of processing a digital image.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENEDICT LEE whose telephone number is (571)270-0390. The examiner can normally be reached 10:00-16:00 (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R. Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BENEDICT E LEE/Examiner, Art Unit 2665
/Stephen R Koziol/Supervisory Patent Examiner, Art Unit 2665
¹ The recited limitation refines the output of the detected objects and represents a conventional filtering operation applied to improve accuracy. Reciting such filtering would have been an obvious and routine modification yielding predictable results, and accordingly does not confer patentable distinction.
² Examiner construes "raw exposure-specific images" as input images, i.e., low dynamic range (LDR) images prior to processing by the trained model.
³ Examiner construes "processed exposure-specific images" as output images, i.e., HDR images produced by processing in the machine learning model.
⁴ Desai discloses a fusion technique that combines recognition results for a target. See Desai, col. 12, lines 49–61: "[f]usion technique (block 308) that combines the recognition results of the domain constrained machine learning model."