Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/12/2024 was filed before the mailing date of the non-final rejection on 01/20/2026. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Yan et al. (US 20230401837 A1, hereinafter, "Yan") in view of Toyoda et al. (US 20190052791 A1, hereinafter, "Toyoda").
Regarding Claim 1, Yan teaches a modification method of an image brightness of a photographing system, wherein the photographing system comprises an image capturing device, a moving object detection device, and a processing device (Yan, [0030], ln. 5-6, "…an image capture apparatus such as a camera."); the image capturing device has an image capturing range (Yan, [0032], ln. 1-2, "…the camera and the LIDAR sensor, may have overlapping fields of view…"); the moving object detection device has a detection range (Yan, [0032], ln. 1-2, "…the camera and the LIDAR sensor, may have overlapping fields of view…"); the image capturing range partially overlaps with the detection range (Yan, [0032], ln. 1-2, "…the camera and the LIDAR sensor, may have overlapping fields of view…"); the modification method of the image brightness of the photographing system comprises the following steps: step A: providing a corresponding relationship data, wherein the corresponding relationship data has a plurality of predetermined location data corresponding to a plurality of predetermined locations in the detection range of the moving object detection device (Yan, Fig. 3A, [0045], ln. 3-5, "For example, each point in the point cloud collected by the LIDAR sensor has a set of coordinates in a local coordinate system [i.e., a coordinate system established with the vehicle 100 as a reference object]."); each of the plurality of predetermined location data comprises a predetermined image coordinate area corresponding to each of the plurality of predetermined locations (Yan, Fig. 3A, [0045], ln. 4-5, "For example, each point in the point cloud collected by the LIDAR sensor has a set of coordinates in a local coordinate system [i.e., a coordinate system established with the vehicle 100 as a reference object]."); step B: detecting a location data of a moving object in the detection range by the moving object detection device (Yan, Fig. 1, [0047], ln. 4-7, "For these dynamic object's associated frames, the computing device 120 may generate an original representation of the dynamic object [e.g., an original bounding box] according to the points associated with the dynamic object in each frame…"); step C: obtaining the predetermined image coordinate area of one of the plurality of predetermined location data, which corresponds to the location data, from the corresponding relationship data by the processing device according to the location data (Yan, Fig. 1, [0047], ln. 4-7, "For these dynamic object's associated frames, the computing device 120 may generate an original representation of the dynamic object [e.g., an original bounding box] according to the points associated with the dynamic object in each frame…"); step D: controlling the image capturing device by the processing device for obtaining a first image captured by the image capturing device (Yan, Figs. 1, 3A, & 3B, [0046], ln. 1-4, "Referring to FIGS. 3A and 3B in conjunction with FIG. 1, while the vehicle 100 is running, the computing system 150 can send a trigger signal simultaneously to the sensors of the sensor system 144 [e.g., the camera 304 and the LIDAR sensor 306], triggering the camera 304 and the LIDAR sensor 306 simultaneously or almost simultaneously to acquire the image and the point cloud.").
Yan does not teach step E: calculating a first global exposure parameter of a whole of the first image and calculating a first local exposure parameter of a first local image, which corresponds to the predetermined image coordinate area, of the first image by the processing device; step F: controlling the image capturing device by the processing device for obtaining another image captured by the image capturing device, wherein a whole of the another image has a global exposure parameter; a local image of the another image corresponding to the predetermined image coordinate area has a local exposure parameter, wherein the local exposure parameter is closer to an ideal exposure parameter than the first local exposure parameter; or step G: outputting the another image as a frame of a video by the processing device. However, Toyoda teaches step E: calculating a first global exposure parameter of a whole of the first image and calculating a first local exposure parameter of a first local image, which corresponds to the predetermined image coordinate area, of the first image by the processing device (Toyoda, [0399], ln. 4-7, "…image data of 'photographing condition A' and the image data of 'photographing condition C' alternately generated after modifying the photographic condition is 'image data in the image data acquisition while modifying the photographic condition based on the region-specific correction map'."); step F: controlling the image capturing device by the processing device for obtaining another image captured by the image capturing device, wherein a whole of the another image has a global exposure parameter; a local image of the another image corresponding to the predetermined image coordinate area has a local exposure parameter, wherein the local exposure parameter is closer to an ideal exposure parameter than the first local exposure parameter (Toyoda, [0402], ln. 1-4, "The photographic condition A is exposure with a proper exposure adjusted to the background, the photographic condition B is exposure lower than the proper exposure, and the photographic condition C is exposure adjusted to a subject, e.g., exposure closer to the proper exposure than that of the photographic condition B."); and step G: outputting the another image as a frame of a video by the processing device (Toyoda, Fig. 1, [0046], ln. 1, "The imaging unit 130 is configured to sequentially generate and output image data."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Toyoda with those of Yan because it is well known in the art to calculate the global and local exposure parameters of a first image, capture another image whose local exposure parameter in the predetermined coordinate area is closer to the ideal exposure parameter than that of the first image, and output the resulting image as a frame of a video. This technique is common in wide dynamic range (WDR)/high dynamic range (HDR) imaging.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Toyoda and Schrama et al. (US 20230267629 A1, hereinafter, "Schrama").
Regarding Claim 5, Yan and Toyoda teach the limitations of independent Claim 1 as noted above. Yan and Toyoda teach steps A through G as noted above. Schrama teaches step H: steps A through G are repeatedly executed to form the video having a plurality of frames (Schrama, [0047], ln. 1-4, "While discussions of the method 200 and other discussions provided herein refer to processing of images, these discussions are equally applicable to processing of video signals that may be acquired if the camera 106 is a video recording device that generates a video that includes a plurality of consecutive frames."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Schrama with those of Yan and Toyoda because it is well known in the art to repeat the process used to create a single image in order to create a video having a plurality of frames.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Toyoda, Schrama, and Fan et al. (US 20160132120 A1, hereinafter, "Fan").
Regarding Claim 6, Yan, Toyoda, and Schrama teach the limitations of dependent Claim 5 as noted above. Fan teaches when the moving object detection device does not detect the moving object in the detection range, step H is terminated (Fan, Fig. 2, [0021], ln. 10-13, "The processor 12 can start to record the image into the storage module 16 when the movement of the image is detected, and can stop recording the image when the movement of the image is not detected after a predefined period."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fan with those of Yan, Toyoda, and Schrama because it is well known in the art to stop a video recording of a moving object when object movement is not detected for a predetermined amount of time.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Toyoda and Fuchikami et al. (US 8350990 B2, hereinafter, "Fuchikami").
Regarding Claim 7, Yan and Toyoda teach the limitations of independent Claim 1 as noted above. Fuchikami teaches the photographing system comprises a light source module (Fuchikami, Fig. 1, [0023], ln. 6, "…the light source 10…"); in step B, the location data comprises a detection distance (Fuchikami, Fig. 1, [0023], ln. 3-4, "…also the distance between the first user 101 and the liquid crystal layer 30 from the picture of the first user 101 shot by the camera 41…"); the modification method of the image brightness of the photographing system comprises a controlling step of a light source before step D: controlling a luminous intensity of the light source module by the processing device according to the detection distance, so that the luminous intensity of the light source module is inversely proportional to the detection distance (Fuchikami, Fig. 1, [0023], ln. 7-9, "…the light emission controller 60 reduces the light intensity of the light source 10 when the first user 101 is near to the liquid crystal layer 30."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fuchikami with those of Yan and Toyoda because it is well known in the art to use a light source with a luminous intensity that is inversely proportional to the distance between the camera and the subject.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Toyoda, Fuchikami, and Fish et al. (US 20110012746 A1, hereinafter, "Fish").
Regarding Claim 8, Yan, Toyoda, and Fuchikami teach the limitations of dependent Claim 7 as noted above. Fish teaches the photographing system comprises an ambient light sensor (Fish, Fig. 1, [0069], ln. 1, "…includes the ambient light sensor 108…"); the modification method of the image brightness of the photographing system comprises the following steps before step D: detecting an ambient illuminance by the ambient light sensor (Fish, Fig. 1, [0069], ln. 3-4, "…as a function of the detected ambient light by the ambient light sensor 108 being below a threshold value."); when the ambient illuminance is less than a predetermined illuminance, the processing device controls the light source module to activate and the controlling step of the light source is executed (Fish, Fig. 1, [0069], ln. 2-4, "…control the light source 104 to substantially constantly emit light as a function of the detected ambient light by the ambient light sensor 108 being below a threshold value."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Fish with those of Yan, Toyoda, and Fuchikami because it is well known in the art to use an ambient light sensor to determine whether ambient light is above a certain threshold and to activate a light source when the ambient light falls below that threshold.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Toyoda and Garcia et al. (US 20230196767 A1, hereinafter, "Garcia").
Regarding Claim 9, Yan and Toyoda teach the limitations of independent Claim 1 as noted above. Garcia teaches each of the plurality of the predetermined location data of the corresponding relationship data comprises a predetermined distance and a predetermined angle (Garcia, [0026], ln. 1-2, "According to at least one embodiment, the temporally sequential detection instruction specifies to the user a detection distance and/or a detection angle relative to the usage object."); in step B, the location data comprises a detection distance and a detection angle (Garcia, [0026], ln. 1-2, "According to at least one embodiment, the temporally sequential detection instruction specifies to the user a detection distance and/or a detection angle relative to the usage object."). Garcia does not teach the processing device selects one of the plurality of predetermined location data, which comprises the predetermined distance corresponding to the detection distance and the predetermined angle corresponding to the detection angle, from the corresponding relationship data and then obtains the predetermined image coordinate area corresponding to the predetermined location data according to the predetermined location data being selected. However, Toyoda teaches the processing device selects one of the plurality of predetermined location data, which comprises the predetermined distance corresponding to the detection distance and the predetermined angle corresponding to the detection angle, from the corresponding relationship data and then obtains the predetermined image coordinate area corresponding to the predetermined location data according to the predetermined location data being selected (Toyoda, [0399], ln. 4-7, "…the image data of 'photographing condition A' and the image data of 'photographing condition C' alternately generated after modifying the photographic condition is 'image data in the image data acquisition while modifying the photographic condition based on the region-specific correction map'."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Garcia with those of Yan and Toyoda because it is well known in the art to use detection distances and angles to identify predetermined local areas relative to coordinate locations.
Allowable Subject Matter
Claims 2-4 & 10-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN DANIEL BARRY whose telephone number is (571)270-0432. The examiner can normally be reached M-Th 0730-1630.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lin Ye, can be reached at 571-272-7372. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEVEN DANIEL BARRY/Examiner, Art Unit 2638
/LIN YE/Supervisory Patent Examiner, Art Unit 2638