Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Priority
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statement(s) submitted on 3/1/2024 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 16, and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hayaishi (US 20090028390 A1).
Regarding claim 1, Hayaishi teaches a camera device (Figs. 41-43), comprising:
an image sensor (imaging device 508) configured to capture an image (Fig. 41);
an object identifier (CPU 518) configured to identify an object included in the captured image (Fig. 42; S820);
a distance measurement device (CPU 518) configured to determine a distance between the camera device and the identified object based on an occupancy percentage of the identified object in the captured image (Figs. 6, 8; paras. 0102-0108, 0227; S830; “subject distance estimation unit 330 acquires information that is necessary to calculate the subject distance Sd using Equation (3)”, “Sd=(Wwi.times.Wf.times.f)/(Wfi.times.Wx) (3)”); and
a controller (CPU 518) configured to determine an optimal focus location where a reference value is the largest while moving a lens in a focus range corresponding to the determined distance between the camera device and the identified object, the focus range including at least a focus location of the lens (Fig. 42; S850-S860; paras. 0230-0231).
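For illustration only, the subject-distance relationship quoted above (Hayaishi's Equation (3), Sd = (Wwi × Wf × f) / (Wfi × Wx)) can be sketched as follows. The variable meanings in the comments are assumptions made for this sketch and are not asserted as part of the record:

```python
def subject_distance_sd(Wwi, Wf, f, Wfi, Wx):
    """Hayaishi Equation (3): Sd = (Wwi * Wf * f) / (Wfi * Wx).

    Assumed meanings (illustrative only, not from the reference):
      Wwi -- image width, in pixels
      Wf  -- actual (physical) width of the subject (e.g., a face), in mm
      f   -- lens focal length, in mm
      Wfi -- width of the subject as it appears in the image, in pixels
      Wx  -- width of the imaging surface (sensor), in mm
    """
    return (Wwi * Wf * f) / (Wfi * Wx)

# Hypothetical values: 4000 px wide image, 160 mm face, 50 mm lens,
# face spanning 400 px, 36 mm sensor width
d = subject_distance_sd(4000, 160, 50, 400, 36)  # ~2222 mm (about 2.2 m)
```

The formula reflects that a subject occupying a smaller fraction of the frame (smaller Wfi) yields a larger estimated distance, which is the occupancy-percentage relationship relied on in the rejection.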
Regarding claim 2, Hayaishi teaches the camera device of claim 1, wherein the reference value comprises contrast data or edge data (paras. 0230-0231).
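As a sketch of the claimed search (not Hayaishi's implementation), a controller that sweeps the lens through a restricted focus range and keeps the position where the reference value (e.g., contrast or edge data) is largest might look like:

```python
def best_focus(lens_positions, metric_at):
    """Sweep lens positions within a focus range and return the position
    whose reference value (contrast or edge metric) is largest."""
    best_pos, best_val = None, float("-inf")
    for pos in lens_positions:
        val = metric_at(pos)  # evaluate contrast/edge metric at this position
        if val > best_val:
            best_pos, best_val = pos, val
    return best_pos

# Toy metric peaking at position 12 (hypothetical focus range 10..15)
pos = best_focus(range(10, 16), lambda p: -(p - 12) ** 2)  # -> 12
```

Restricting `lens_positions` to the range implied by the estimated subject distance is what distinguishes this search from a full-travel contrast sweep.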
Regarding claim 16, claim 16 recites method steps corresponding to claim 1 and is rejected for the same reasons set forth above.
Regarding claim 21, claim 21 recites features corresponding to claim 1 and is rejected for the same reasons set forth above. In addition, Hayaishi teaches a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations corresponding to the features recited in claim 1 (paras. 0257-0258).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4 and 17 are rejected under 35 U.S.C. 102(a)(1) as anticipated by Hayaishi (US 20090028390 A1) or, in the alternative, under 35 U.S.C. 103 as obvious over Hayaishi in view of Reddy (US 9927974 B2).
Regarding claim 3, Hayaishi teaches the camera device of claim 1, further comprising a storage configured to store specification information of the camera device, wherein the distance measurement device is further configured to obtain an angle of view (the angle of view .theta. or focal length f) in a vertical direction from the specification information (Figs. 6, 8; paras. 0102-0108);
wherein the distance measurement device is configured to determine the distance (distance Sd) between the camera device and the identified object based on the obtained angle of view in the vertical direction, a ratio of a size of the identified object, and a physical size of the identified object, and wherein the ratio of the size of the identified object corresponds to a ratio of a size of at least a portion of the identified object in the vertical direction to a size of the captured image in the vertical direction (Fig. 8; paras. 0102-0108).
Alternatively, in the same field of endeavor, Reddy teaches
further comprising a storage configured to store specification information of the camera device, wherein the distance measurement device is further configured to obtain an angle of view in a vertical direction from the specification information; wherein the distance measurement device is configured to determine the distance between the camera device and the identified object based on the obtained angle of view in the vertical direction, a ratio of a size of the identified object, and a physical size of the identified object, and wherein the ratio of the size of the identified object corresponds to a ratio of a size of at least a portion of the identified object in the vertical direction to a size of the captured image in the vertical direction (Figs. 1A, 4D; col. 7; from equation EQ(2) in step 463, the distance d can be calculated).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to apply Reddy's teachings to Hayaishi to arrive at the features of claim 3 as recited above, in order to utilize an alternative object-distance calculation configuration that allows the distance to be determined with optimized parameters, yielding a predictable result.
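For illustration only, the claimed vertical angle-of-view relationship can be sketched with a standard pinhole-model relation. This is an editorial sketch, not Reddy's EQ(2): if an object of physical height H fills ratio r of the vertical frame under a vertical angle of view θv, the frame height at distance d is 2·d·tan(θv/2), so d = H / (2·r·tan(θv/2)):

```python
import math

def distance_from_vertical_aov(theta_v_deg, ratio, physical_height_m):
    """Estimate camera-to-object distance from the vertical angle of view.

    Pinhole-model relation (illustrative assumption, not from the record):
      frame height at distance d:  H_frame = 2 * d * tan(theta_v / 2)
      ratio = physical_height / H_frame
      =>  d = physical_height / (2 * ratio * tan(theta_v / 2))
    """
    theta_v = math.radians(theta_v_deg)
    return physical_height_m / (2.0 * ratio * math.tan(theta_v / 2.0))

# Hypothetical: 0.16 m tall face filling 10% of the frame,
# 40-degree vertical angle of view -> roughly 2.2 m
d = distance_from_vertical_aov(40.0, 0.10, 0.16)
```

The inputs mirror the claim: the angle of view comes from stored specification information, the ratio from the object's vertical extent relative to the image, and the physical size from a known object dimension.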
Regarding claim 4, Hayaishi (or, in the alternative, the combination of Hayaishi and Reddy) teaches the camera device of claim 3, wherein the object comprises a person, and wherein the portion of the object comprises a face of the person (Hayaishi: Figs. 6, 8).
Regarding claim 17, claim 17 recites features corresponding to claim 3 and is rejected for the same reasons set forth above.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi (US 20090028390 A1) in view of Park et al. (US 20200372794 A1).
Regarding claim 5, Hayaishi teaches everything as claimed in claim 3, but fails to teach
wherein the object comprises a vehicle, and wherein the portion of the object comprises a license plate of the vehicle.
However, in the same field of endeavor Park teaches
wherein the object comprises a vehicle, and wherein the portion of the object comprises a license plate of the vehicle (Fig. 2; para. 0056: “distance (D) between vehicle and camera=f×P/(s×p) [Equation 5] (where f is the focal length of the camera, P is the actual size of the license plate, s is the pixel size, and p is the size of the license plate detected in the image)”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to apply Park's teachings to Hayaishi such that the object comprises a vehicle and the portion of the object comprises a license plate of the vehicle, in order to acquire focused image data of the license plate and enable a vehicle-speed measuring system, yielding a predictable result.
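For illustration only, Park's quoted Equation 5, D = f×P/(s×p), can be sketched directly; the parameter meanings are taken from the quotation above, and the unit conventions in the comments are assumptions for this sketch:

```python
def plate_distance(f_mm, P_mm, s_mm, p_px):
    """Park Equation 5: D = f * P / (s * p).

    Parameters (per the quoted passage; units assumed for illustration):
      f_mm -- focal length of the camera, in mm
      P_mm -- actual size of the license plate, in mm
      s_mm -- pixel size, in mm per pixel
      p_px -- size of the license plate detected in the image, in pixels
    """
    return (f_mm * P_mm) / (s_mm * p_px)

# Hypothetical: 50 mm lens, 520 mm wide plate, 0.005 mm pixels,
# plate spanning 200 px in the image
D = plate_distance(50, 520, 0.005, 200)  # -> 26000 mm = 26 m
```

Like Hayaishi's Equation (3), this uses the known physical size of a reference feature (here, the license plate) and its apparent size in the image to recover distance.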
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi (US 20090028390 A1) in view of Kim et al. (US 20160094791 A1).
Regarding claim 6, Hayaishi teaches everything as claimed in claim 3, but fails to teach
wherein the controller is further configured to: obtain locus data from the specification information stored in the storage; and determine the focus range based on the determined distance between the camera device and the identified object and based on the locus data, and wherein the locus data corresponds to a focus location determined based on a distance to the object at a specific zoom magnification.
However, in the same field of endeavor Kim teaches
wherein the controller is further configured to: obtain locus data from the specification information stored in the storage; and determine the focus range based on the determined distance between the camera device and the identified object and based on the locus data, and wherein the locus data corresponds to a focus location determined based on a distance to the object at a specific zoom magnification (paras. 0021, 0061-0072, 0101).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to apply Kim's teachings to Hayaishi to arrive at the features of claim 6 as recited above, in order to maintain focus while minimizing the time that elapses when changing zoom power, yielding a predictable result.
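For illustration only, locus data of the kind recited (a focus location as a function of subject distance at a specific zoom magnification) can be sketched as a lookup table with interpolation. The table values and the linear interpolation are editorial assumptions, not Kim's implementation:

```python
from bisect import bisect_left

# Hypothetical locus table for one zoom magnification:
# (subject distance in m, lens focus position in arbitrary units)
LOCUS_2X = [(1.0, 120.0), (2.0, 90.0), (5.0, 60.0), (10.0, 45.0)]

def focus_from_locus(distance_m, locus=LOCUS_2X):
    """Interpolate a focus position from locus data for a given distance.

    Distances outside the table are clamped to its endpoints; in the
    claimed device this position would seed the restricted focus range."""
    dists = [d for d, _ in locus]
    if distance_m <= dists[0]:
        return locus[0][1]
    if distance_m >= dists[-1]:
        return locus[-1][1]
    i = bisect_left(dists, distance_m)
    (d0, f0), (d1, f1) = locus[i - 1], locus[i]
    t = (distance_m - d0) / (d1 - d0)  # linear interpolation weight
    return f0 + t * (f1 - f0)
```

Centering a narrow search window on the interpolated position, rather than sweeping the full lens travel, is what ties the locus data to the determined focus range.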
Claims 7-9, 11, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi (US 20090028390 A1) in view of Gum (US 20130258167 A1).
Regarding claims 7-9 and 11, Hayaishi teaches everything as claimed in claim 1, but fails to teach
Claim 7: The camera device of claim 1, wherein the captured image comprises a plurality of objects, and wherein the identified object is an object selected among the plurality of objects.
Claim 8: The camera device of claim 7, wherein the identified object is selected as an object closest to a center of the captured image among the plurality of objects.
Claim 9: The camera device of claim 7, wherein the identified object is selected as an object comprising a size that is standardized among the plurality of objects.
Claim 11: The camera device of claim 7, wherein the controller is further configured to set a window in the captured image around the identified object, and wherein the controller is configured to determine the optimal focus location in the set window.
However, in the same field of endeavor Gum teaches
Claim 7: The camera device of claim 1, wherein the captured image comprises a plurality of objects, and wherein the identified object is an object selected among the plurality of objects (paras. 0038, 0063, 0074).
Claim 8: The camera device of claim 7, wherein the identified object is selected as an object closest to a center of the captured image among the plurality of objects (paras. 0038, 0063, 0074; “one particular object may be selected for focus at least in part if the object is located closer to the center of the frame than other objects of the same color”).
Claim 9: The camera device of claim 7, wherein the identified object is selected as an object comprising a size that is standardized among the plurality of objects (paras. 0066-0067; “objects smaller than a threshold size may not be considered for focus prioritization. This may avoid some spurious effects that could occur if the imaging device or camera attempted to focus on very small objects in a scene”).
Claim 11: The camera device of claim 7, wherein the controller is further configured to set a window (priority object region) in the captured image around the identified object, and wherein the controller is configured to determine the optimal focus location in the set window (paras. 0057, 0077; “Autofocusing on the higher priority object or objects may include selecting a lens focus position that provides for increased contrast of the moving object within the scene”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to apply Gum's teachings to Hayaishi to obtain the features of these claims, in order to prioritize autofocusing on a desired object among multiple objects so that better focus on that object can be obtained automatically, yielding a predictable result.
Regarding claim 18, claim 18 recites features corresponding to claims 7-8 and is rejected for the same reasons set forth above.
Regarding claim 19, claim 19 recites features corresponding to claims 7 and 9 and is rejected for the same reasons set forth above.
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi (US 20090028390 A1) in view of Gum (US 20130258167 A1) as applied to claim 7 or 16 above, and further in view of Huang et al. (US 20210192756 A1).
Regarding claim 10, the combination of Hayaishi and Gum teaches everything as claimed in claim 7, but fails to teach
wherein the object identifier is configured to identify the object based on a deep learning-based object detection algorithm, wherein the object identifier is configured to obtain an accuracy of the plurality of objects based on the deep learning-based object detection algorithm, and wherein the identified object is selected as an object having higher accuracy among the plurality of objects.
However, in the same field of endeavor Huang teaches
wherein the object identifier is configured to identify the object based on a deep learning-based object detection algorithm, wherein the object identifier is configured to obtain an accuracy of the plurality of objects based on the deep learning-based object detection algorithm, and wherein the identified object is selected as an object having higher accuracy among the plurality of objects (paras. 0044, 0059).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to apply Huang's teachings to the combination to arrive at the features of claim 10 as recited above, in order to utilize improved techniques that exploit the detection accuracy of deep-learning-based object detection while reducing computational cost, yielding a predictable result.
Regarding claim 20, claim 20 recites features corresponding to claim 10 and is rejected for reasons similar to those presented for claim 10.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi (US 20090028390 A1) in view of Huang et al. (US 20210192756 A1).
Regarding claim 12, Hayaishi teaches everything as claimed in claim 1, but fails to teach
wherein the object identifier is configured to identify the object based on a deep learning-based object detection algorithm, wherein the object identifier is configured to obtain an accuracy of the identified object based on the deep learning-based object detection algorithm, and wherein the focus range is set based on the obtained accuracy.
However, in the same field of endeavor Huang teaches
wherein the object identifier is configured to identify the object based on a deep learning-based object detection algorithm, wherein the object identifier is configured to obtain an accuracy of the identified object based on the deep learning-based object detection algorithm (Huang: paras. 0044, 0059), and wherein the focus range is set based on the obtained accuracy (Hayaishi: the focus range is set based on an identified object [fig. 42]; Huang: the identified object is detected based on the obtained accuracy).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to apply Huang's teachings to Hayaishi to arrive at the features of claim 12 as recited above, in order to utilize improved techniques that exploit the detection accuracy of deep-learning-based object detection while reducing computational cost, yielding a predictable result.
Claims 11 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi (US 20090028390 A1) in view of Park (US 10182196 B2).
Regarding claims 13-15, Hayaishi teaches everything as claimed in claim 1, but fails to teach
Claim 13. The camera device of claim 1, wherein the controller is further configured to set a window in the captured image based on a movement of the identified object, and wherein the controller is configured to determine the optimal focus location in the set window.
Claim 14. The camera device of claim 13, wherein the controller is further configured to change a size of the window based on the movement of the identified object with respect to the image sensor.
Claim 15. The camera device of claim 13, wherein the controller is further configured to move and set the window to a predicted location based on movement of the identified object to another location in the captured image.
However, in the same field of endeavor Park teaches
Claim 13. The camera device of claim 1, wherein the controller is further configured to set a window in the captured image based on a movement of the identified object, and wherein the controller is configured to determine the optimal focus location in the set window (Figs. 11-12; col. 21, line 35 to col. 22, line 7; movement of the subject is tracked and a window/ROI block is enlarged, reduced or moved in response to the movement).
Claim 14. The camera device of claim 13, wherein the controller is further configured to change a size of the window based on the movement of the identified object with respect to the image sensor (Figs. 11-12; col. 21, line 35 to col. 22, line 7).
Claim 15. The camera device of claim 13, wherein the controller is further configured to move and set the window to a predicted location based on movement of the identified object to another location in the captured image (Figs. 11-12; col. 21, line 35 to col. 22, line 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to apply Park's teachings to Hayaishi to obtain the features of these claims, in order to allow a subject to be quickly and accurately tracked, increasing the accuracy of the 3A algorithm and yielding a predictable result.
Regarding claim 11 (addressed above over the combination of Hayaishi and Gum), claim 11 recites features corresponding to claim 13 and, in the alternative, is also rejected for the same reasons presented for claim 13, over Hayaishi in view of Gum and further in view of Park.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Quan Pham whose telephone number is (571)272-4438. The examiner can normally be reached Mon-Fri 9am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran can be reached at (571) 272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Quan Pham/Primary Examiner, Art Unit 2637