DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/02/2026 has been entered.
Status of the Claims
Claims 1-22 are pending with claims 1, 11, 15-16, and 19 being amended.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Peterson et al. (US 20160243988 A1) in view of Kim et al. (US 20180052457 A1).
Concerning claims 1, 11, and 15, Peterson et al. (hereinafter Peterson) teaches one or more processors, comprising circuitry to:
receive one or more sensory inputs from one or more visual sensors attached to a device (figs. 1-6: camera module 20, ¶0028: camera module 20 (such as a sideward and/or rearward facing imaging sensor or camera) may provide an image output for a vehicle vision system, such as a lane departure warning system or object detection system or blind zone alert system or surround view vision system or other vehicle vision system or the like, and may utilize aspects of various imaging sensors or imaging array sensors or cameras or the like);
identify, from the one or more sensory inputs, one or more objects obstructing a view of the device (¶0074: The image sensor provides image output as part of an object detection system or blind zone alert system that detects obstacles and/or objects in the environment around the vehicle); and
cause an extensible apparatus upon which the one or more visual sensors are affixed to extend in a linear direction away from a side of the device to a different position relative to the device to obtain a view around the identified one or more objects obstructing the view of the device (¶0035 & ¶0065: The camera is automatically extended, retracted and/or pivoted in response to a triggering event in order to see further around an object or obstacle at or near the vehicle. Figs. 1 & 2 show camera module 20 attached to an extensible apparatus in order to extend in a linear direction away from a side of the device.). Not explicitly taught is us[ing] one or more machine learning models to generate information indicating an adjustment of the one or more visual sensors to optimize visual sensing of the one or more visual sensors based, at least in part, on the received one or more sensory inputs, the identified one or more objects, and environmental conditions.
Kim et al. (hereinafter Kim), in the same field of endeavor, teaches the use of one or more machine learning models to generate information indicating an adjustment of the one or more visual sensors to optimize visual sensing of the one or more visual sensors based, at least in part, on the received one or more sensory inputs, the identified one or more objects, and environmental conditions (fig. 10: operation 1020, ¶0084: a driving situation model or other machine or deep learning may be used in estimating the driving situation based on information from sensors, determined conditions and previous training data; ¶0083: inputs from various sensors (e.g., speed, GPS, etc.), identified objects (e.g., obstacles or obtruding vehicles) and environmental conditions (e.g., weather, road conditions, etc.) may be used in estimating the driving situation. Based on the driving situation, controller 120 determines or sets the appropriate parameter (i.e., optimized visual sensing) associated with the width of the stereo camera). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Kim to the Peterson invention in order to estimate the driving situation of the vehicle. The addition of machine learning models would allow the Peterson invention to be trained or designed based on training data in order to make decisions and/or predictions without explicit instructions.
Concerning claims 2 and 12, Kim teaches the one or more processors of claim 1, further comprising using the one or more processors to determine environmental conditions based at least in part on receiving one or more visual sensor inputs from the one or more visual sensors (¶0083: Also, the controller 120 may predict the road condition based on a result obtained by analyzing an image capturing a current driving road and/or the weather information.).
Concerning claim 3, Peterson further teaches the one or more processors of claim 2, wherein the one or more visual sensors include one or more cameras (figs. 1-6: camera module 20 & ¶0028).
Concerning claim 4, Peterson further teaches the one or more processors of claim 2, wherein the one or more visual sensor inputs include at least one of: one or more images of an environment surrounding the one or more visual sensors (figs. 13A-13B; ¶0035) or one or more video frames of an environment surrounding the one or more visual sensors (figs. 13A-13B; ¶0035).
Concerning claim 5, Kim further teaches the one or more processors of claim 2, wherein using the processor to determine the environmental conditions includes:
inputting the one or more visual sensor inputs to the one or more machine learning models (¶¶0083-0084), and
receiving as output from the one or more machine learning models the environmental conditions (¶¶0083-0084).
Concerning claim 6, Kim further teaches the one or more processors of claim 5, wherein the one or more machine learning models are trained using training data to learn the environmental conditions from visual sensor inputs (¶0084: the driving situation model or other machine or deep learning may be trained or designed based on previous training data, e.g., information from differing sensors and determined conditions).
Concerning claim 7, Kim further teaches the one or more processors of claim 2, wherein the environmental conditions include a state of an environment surrounding the one or more visual sensors (¶0083: road conditions, weather, etc.).
Concerning claim 8, Peterson further teaches the one or more processors of claim 1, wherein adjusting the one or more visual sensors comprises using a position control signal to instruct the one or more visual sensors from a current position to an extended position of a defined amount (¶0046).
Concerning claim 9, Peterson further teaches the one or more processors of claim 1, wherein causing the one or more visual sensors to be adjusted includes causing the one or more visual sensors to change position rotationally (¶0038).
Concerning claim 10, Peterson further teaches the one or more processors of claim 1, wherein causing the one or more visual sensors to be adjusted includes causing the one or more visual sensors to change position linearly (figs. 1-2: retracted and extended positions of camera module 20).
Concerning claim 13, Kim further teaches the non-transitory computer readable medium of claim 11, wherein visual information of the environmental conditions is displayed to a user (¶0059).
Concerning claim 14, Kim further teaches the non-transitory computer readable medium of claim 13, wherein the visual information is output to an automated system that makes a decision based at least in part on the visual information (¶0087).
Concerning claim 16, Peterson further teaches the method of claim 15, wherein causing the extensible apparatus to extend includes changing the one or more visual sensors from a current position to an optimal position and includes causing the one or more visual sensors to change linearly with respect to the current position (figs. 1-2: retracted and extended positions of camera module 20; ¶0046).
Concerning claim 17, Peterson further teaches the method of claim 16, wherein causing the one or more visual sensors to change from the current position to the optimal position includes causing the one or more visual sensors to change rotationally with respect to the current position (¶0038; ¶¶0046-0047).
Concerning claim 18, Kim teaches the method of claim 15, further comprising using the one or more machine learning models, by a vehicle, to determine visual information of environmental conditions surrounding the vehicle (¶0084: the driving situation model or other machine or deep learning may be trained or designed based on previous training data, e.g., information from differing sensors and determined conditions).
Concerning claim 19, Peterson, now incorporating the teachings of Kim, further teaches the method of claim 16, further comprising: using one or more additional machine learning models to adjust the one or more visual sensors from the optimal position to a new optimal position based at least in part on a state of an environment surrounding the one or more visual sensors (Peterson, ¶0065: the extension, retraction and/or pivoting of the camera are reactive to detections at or around the vehicle and are thus updated accordingly (e.g., detecting the blind spot region (see, ¶0005)); Kim, fig. 10: operation 1020, ¶0084: a driving situation model or other machine or deep learning may be used in estimating the driving situation based on information from sensors, determined conditions and previous training data).
Concerning claim 20, Peterson further teaches the method of claim 19, wherein the optimal position is an extended position and the new optimal position is a retracted position (¶0005: a retracted position may be suitable for blind spot detection).
Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Peterson et al. (US 20160243988 A1) in view of Kim et al. (US 20180052457 A1) and Atwater et al. (US 20190340440 A1).
Concerning claim 21, Peterson in view of Kim teaches the one or more processors of claim 1. Not explicitly taught is wherein output of the one or more machine learning models indicates how to adjust the one or more visual sensors.
Atwater et al. (hereinafter Atwater), in a similar field of endeavor, teaches using the output of one or more machine learning models that indicate how to adjust the one or more visual sensors (¶¶0074-0075: Machine learning techniques are used to learn preferred camera positions and locations based on the training data. The monitoring system may also learn how to position cameras at certain locations and the camera settings that may be utilized to obtain quality images.). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Atwater to the Peterson in view of Kim invention in order to adjust the visual sensors to obtain high quality images (Atwater, ¶0075).
Concerning claim 22, Peterson in view of Kim teaches the method of claim 15. Not explicitly taught is the method, wherein the one or more machine learning models are used to infer the adjustment.
Atwater further teaches using the one or more machine learning models to infer the adjustment (¶¶0074-0075: Machine learning techniques are used to learn preferred camera positions and locations based on the training data. The monitoring system may also learn how to position cameras at certain locations and the camera settings that may be utilized to obtain quality images.). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Atwater to the Peterson in view of Kim invention in order to adjust the visual sensors to obtain high quality images (Atwater, ¶0075).
Response to Arguments
Applicant’s arguments, see pages 6-8 of the remarks, filed 02/02/2026, with respect to the rejection of claims 1-22 under 35 U.S.C. §§ 102 & 103 have been fully considered, but they are moot in view of the new grounds of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M ANDERSON II whose telephone number is (571)270-1444. The examiner can normally be reached Monday - Friday 10AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN PENDLETON can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/James M Anderson II/Primary Examiner, Art Unit 2425