DETAILED ACTION
This non-final Office action is in response to the application filed 18 December 2024.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Claims 1-4 are pending, having a filing date of 18 December 2024 and claiming foreign priority to Japanese Patent Application Number JP 2023-216104, filed 21 December 2023.
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP 2023-216104, filed on 21 December 2023.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted 18 December 2024 and 30 June 2025 comply with 37 CFR 1.97. Accordingly, the IDSs have been considered by the examiner. Initialed copies of the 1449 forms are enclosed herewith. It should be noted that the 2013/0219571 application cited in the IDS dated 30 June 2025 has not been considered because the citation appears to be a typographical error; the application cited in the EP application is 2023/0219571, which has been considered.
Drawings
The drawings, filed 18 December 2024, are accepted by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“an intersection recognition module configured to recognize an intersection” (see specification [0023] disclosing that, as illustrated in FIG. 2A, the ECU 10 includes, as functional elements, an intersection recognition module 100, a left/right turn prediction module 110, a target deceleration rate calculation module 120, a deceleration control module 130, a target vehicle speed changing module 140, and the like. Each of those functional elements 100 to 140 is implemented by the CPU 11 of the ECU 10 reading out a program stored in the ROM 12 into the RAM 13 and executing the read-out program; [0024] disclosing that the camera sensor 42 of the external sensor device 40 acquires the traffic lights and road signs installed at the intersection before acquiring the stop line of the intersection. ... the intersection recognition module 100 uses a publicly known method to recognize the position of the intersection (relative position with respect to the vehicle VH) by processing the image data captured by the camera sensor 42);
“a left/right turn prediction module configured to predict whether the vehicle is to turn left or right” (see specification [0023]);
“a deceleration control module configured to perform ... deceleration control” (see specification [0023]; [0031] disclosing that the deceleration control module 130 performs deceleration control by controlling operation of the braking device 22); and
“an external information acquisition module configured to acquire information on an oncoming vehicle” (see specification [0011] disclosing that the ECU 10 is a central device which performs driving support such as deceleration support. Driving support is a concept including autonomous driving. To the ECU 10, a drive device 20, a steering device 21, a braking device 22, an internal sensor device 30, and an external sensor device 40 are connected).
Because these claim limitation(s) are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The intersection recognition module is interpreted as the camera sensor of the external sensor device as disclosed in [0024]. The left/right turn prediction module is interpreted as the ECU as disclosed in [0023]. The deceleration control module is interpreted as the ECU controlling the braking device as disclosed in [0023] and [0031]. The external information acquisition module is interpreted as the ECU and external sensors as disclosed in [0011].
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication Number 2023/0227033 to Sugawara et al. (hereafter Sugawara) in view of U.S. Patent Publication Number 2017/0217430 to Sherony.
As per claim 1, Sugawara discloses [a] driving support device (see at least Sugawara, abstract), comprising:
an intersection recognition module configured to recognize an intersection in front of a vehicle in a traveling direction (see at least Sugawara, abstract, disclosing that a driver assistance device acquires information about vehicle-driving environment ahead of the vehicle to detect an intersection; [0018] disclosing that the driver assistance device can accurately predict whether the vehicle will interfere with a moving object that is crossing a road close to an intersection at which vehicle makes a right- or left-hand turn to head into the road; [0020] disclosing that the driver assistance device includes a camera unit 21; [0021] disclosing that the camera unit 21 acquires information about the vehicle-driving environment ahead of the vehicle M to obtain information on static objects and information on objects exhibiting dynamic behavior; [0026] disclosing that the camera unit 21 is fixed to the upper midsection of the front part of the interior of the vehicle M. The camera unit 21 includes an on-board camera, an image processing unit (IPU) 21c, and the forward vehicle-driving environment recognizing module 21d);
a left/right turn prediction module configured to predict whether the vehicle is to turn left or right at the intersection recognized by the intersection recognition module (see at least Sugawara [0032] disclosing that when detecting an intersection ahead of the vehicle M, the driver assistance control unit 22 determines whether the vehicle M makes a right-hand turn or a left-hand turn (a “right- or left-hand turn” for short). When determining that the vehicle M makes a right- or left-hand turn, the driver assistance control unit 22 acquires information about the environment ahead of the point at which the vehicle M makes a right- or left hand turn);
a deceleration control module configured to perform, when the left/right turn prediction module predicts that the vehicle is to turn left or right at the intersection, deceleration control of decelerating the vehicle to a predetermined target vehicle speed before the vehicle reaches a predetermined target position (see at least Sugawara, [0042] disclosing that the projected course of the vehicle M at the time of a right-hand turn is illustrated in FIG. 4, in which the left front wheel of the vehicle M on the projected course is to pass through points located on the inner side with respect to the midpoint of the intersection. At the point in time when progression to Step S7 occurs during the execution of the program, the vehicle M is about to enter the intersection at reduced speed (at speeds in a range of 10 to 20 Km/h); [0051] disclosing that Step S16, the vehicle M is brought to a halt short of reaching the crosswalk ahead of the point at which the vehicle M makes a right- or left-hand turn <interpreted as a predetermined target position>. Then, progression to Step S17 occurs. In order to cause the vehicle M to halt short of the crosswalk <interpreted as a predetermined target position>, the driver assistance control unit 22 causes the brake controller 31 and the acceleration/deceleration controller 32 to perform control actions on the basis of the vehicle speed and the distance between the vehicle M and the crosswalk); and
an external information acquisition module configured to acquire information on an oncoming vehicle and/or information on a pedestrian and/or information on a traffic light color of the intersection (see at least Sugawara, [0042]; [0027] disclosing pedestrians; [0044] disclosing that in Step S10, the movement vector (the direction of movement and the speed) of the moving object OB is calculated from positional changes of the moving object OB. The positional changes are detected at every arithmetic operation period. Arrows in FIGS. 3 to 5 denote the movement vectors of the pedestrians OBh and the bicycles OBb; [0052] disclosing that in step S17, it is determined whether there is an object recognized as the moving object OB that is crossing or about to cross the road at the crosswalk. The determination is made on the basis of the forward vehicle-driving environment information obtained by the forward vehicle-driving environment recognizing module 21d of the camera unit 21. If there is an object recognized as the moving object OB that is crossing or about to cross the crosswalk, progression to step S18 occurs; [0053] disclosing that in step S18, it is determined whether the moving object OB has passed the front of the vehicle M. The determination is made on the basis of the forward vehicle-driving environment information obtained by the forward vehicle-driving environment recognizing module 21d of the camera unit 21. The brake controller 31 controls the brakes to cause the vehicle M to keep halting until the moving object OB in front is past the vehicle M) ... . But Sugawara does not explicitly teach the following limitation taught by Sherony:
the deceleration control module being configured to set the predetermined target vehicle speed and/or a start timing for starting the deceleration control based on the information acquired by the external information acquisition module (see at least Sherony, abstract; [0024] disclosing that the one or more sensors can be configured to detect, determine, assess, monitor, measure, quantify and/or sense in real-time. As used herein, the term “real-time” means a level of processing responsiveness that a user or system senses as sufficiently immediate for a particular process or determination to be made, or that enables the processor to keep up with some external process. The sensor system 220 and/or the one or more sensors can be operatively connected to the processor(s) 210, the data store(s) 215, and/or other element of the vehicle 200 (including any of the elements shown in FIG. 2); [0072] disclosing that responsive to determining that the oncoming vehicle 450 intends to execute a left turn across the path of the vehicle 200, a driving maneuver for the vehicle 200 can be determined to avoid a collision with the oncoming vehicle 450 or to mitigate the risk of a collision. The determination of a driving maneuver can be performed by one or more elements of the vehicle 200. For instance, such a determination can be performed by the LTAP/OD module(s) 270, the autonomous driving module(s) 260, and/or the processor(s) 210. The driving maneuver can be any suitable driving maneuver. For instance, the driving maneuver can be decelerating or otherwise reducing the speed of the vehicle 200. As an example, the vehicle 200 can reduce its speed to allow the oncoming vehicle 450 to complete the left turn before the vehicle 200 reaches the intersection 430 and/or to allow the vehicle 200 to potentially activate the braking system 242 less sharply if the oncoming vehicle 450 attempts to make a left turn).
Sugawara and Sherony are analogous art to claim 1 because they are in the same field of technology suitable for deceleration support of a vehicle. Sugawara relates to a driver assistance device to be applied to a vehicle at times of right- and left-hand turns (see at least Sugawara, [0002]). Sherony relates to the operation of vehicles having an autonomous operational mode relative to oncoming objects (see at least Sherony, [0001]).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Sugawara, to set the predetermined target vehicle speed and/or a start timing for starting the deceleration control based on the information acquired by the external information acquisition module, as disclosed in Sherony, with a reasonable expectation of success. Doing so would provide the benefit of improving the performance of vehicles and the safety of vehicle occupants (see at least Sherony, [0012]).
As per claim 2, the combination of Sugawara and Sherony discloses all of the limitations of claim 1, as shown above. Sherony further discloses the following limitations:
wherein the deceleration control module is configured to execute target vehicle speed change processing of lowering the predetermined target vehicle speed and/or start timing change processing of advancing the start timing when the left/right turn prediction module predicts that the vehicle is to turn left or right toward an oncoming lane (as cited in claim 1, see at least Sherony, [0024]; [0072]) and
the external information acquisition module acquires information on an oncoming vehicle predicted to enter the intersection within a predetermined period of time before and after a timing at which the vehicle is to reach the predetermined target position (as cited in claim 1, see at least Sherony, abstract; [0024] disclosing that the one or more sensors can be configured to detect, determine, assess, monitor, measure, quantify and/or sense in real-time. As used herein, the term “real-time” means a level of processing responsiveness that a user or system senses as sufficiently immediate for a particular process or determination to be made <interpreted as within a predetermined period of time before and after a timing>, or that enables the processor to keep up with some external process. The sensor system 220 and/or the one or more sensors can be operatively connected to the processor(s) 210, the data store(s) 215, and/or other element of the vehicle 200 (including any of the elements shown in FIG. 2); [0072]).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Sugawara and Sherony as applied to claim 1 above, and further in view of U.S. Patent Publication Number 2023/0219571 to Ito et al. (hereafter Ito).
As per claim 3, the combination of Sugawara and Sherony discloses all of the limitations of claim 1, as shown above. Sugawara discloses the following limitation:
when the external information acquisition module acquires information on a pedestrian predicted to travel through a crosswalk in a path of the vehicle which is to turn left or right within a predetermined period of time before and after a timing at which the vehicle is to reach the predetermined target position (see at least Sugawara, [0027] disclosing that the camera unit 21 operates as follows: the cameras 21a and 21b project an image of a predetermined imaging field If (see FIGS. 3 to 5) in front of the vehicle M for recording vehicle-driving environment image information, and the IPU 21c then performs image processing on the vehicle-driving environment image information in a predetermined manner. Examples of the forward vehicle-driving environment information to be acquired include: ... moving objects (e.g., pedestrians and bicycles) that are crossing a road; vehicles ahead of the vehicle M; and oncoming vehicles in the opposite lane; [0032]; [0033]). But neither Sugawara nor Sherony explicitly teaches the following limitation taught by Ito:
wherein the deceleration control module is configured to execute target vehicle speed change processing of lowering the predetermined target vehicle speed and/or start timing change processing of advancing the start timing (see at least Ito, [0076] disclosing that when the distance between the vehicle SV and the intersection C is equal to or shorter than the threshold value distance Dv, and the right/left turn intention determination unit 13 determines that the driver has the right/left turn intention at the time t2, the target deceleration calculation unit 15 calculates the first target deceleration G1 based on the vehicle speed V at this time and the estimated stop line position S1; [0079]).
Sugawara, Sherony and Ito are analogous art to claim 3 because they are in the same field of technology suitable for deceleration support of a vehicle. Sugawara relates to a driver assistance device to be applied to a vehicle at times of right- and left-hand turns (see at least Sugawara, [0002]). Sherony relates to the operation of vehicles having an autonomous operational mode relative to oncoming objects (see at least Sherony, [0001]). Ito relates to a deceleration assistance device, a vehicle, a deceleration assistance method, and a program (see at least Ito, [0001]).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Sugawara, as modified by Sherony, to execute target vehicle speed change processing of lowering the predetermined target vehicle speed and/or start timing change processing of advancing the start timing when a pedestrian is predicted to travel through a crosswalk, as disclosed in Ito, with a reasonable expectation of success. Doing so would provide the benefit of improving the comfort and drivability of the vehicle (see at least Ito, [0003], [0044]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Sugawara and Sherony as applied to claim 1 above, and further in view of U.S. Patent Publication Number 2026/0008482 to Goto et al. (hereafter Goto).
As per claim 4, the combination of Sugawara and Sherony discloses all of the limitations of claim 1, as shown above. But neither Sugawara nor Sherony explicitly teaches the following limitations taught by Goto:
wherein the deceleration control module is configured to execute target vehicle speed change processing of increasing the predetermined target vehicle speed and/or start timing change processing of delaying the start timing when it is predicted (see at least Goto, [0214] disclosing that when the vehicle 1 has already entered the intersection as illustrated in FIG. 35, the allowable risk setter 69 sets the rule deviation allowable risk represented by the expression (4) described above for the rule deviation risk set at the intersection. Accordingly, when the rule deviation risk set at the intersection becomes higher than the rule deviation allowable risk in association with a change in the lighting color of the traffic light 111 from green to yellow and from yellow to red, the allowable risk setter 69 increases the rule deviation allowable risk by increasing the set speed of the vehicle 1. That is, it is possible for a person to determine to quickly come out of the rule deviation state in which the vehicle 1 is located at the intersection even after the lighting color of the traffic light 111 changes to red),
based on information on the traffic light color acquired by the external information acquisition module, that the traffic light color is to change to a color which prohibits the vehicle from entering the intersection within a predetermined period of time before and after a timing at which the vehicle is to reach the predetermined target position (see at least Goto, [0214]).
Sugawara, Sherony and Goto are analogous art to claim 4 because they are in the same field of technology suitable for deceleration support of a vehicle. Sugawara relates to a driver assistance device to be applied to a vehicle at times of right- and left-hand turns (see at least Sugawara, [0002]). Sherony relates to the operation of vehicles having an autonomous operational mode relative to oncoming objects (see at least Sherony, [0001]). Goto relates to a driver assistance apparatus, a driver assistance processing method, and a recording medium (see at least Goto, [0001]).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device, as disclosed in Sugawara, as modified by Sherony, to execute target vehicle speed change processing of increasing the predetermined target vehicle speed and/or start timing change processing of delaying the start timing when it is predicted, based on information on the traffic light color acquired by the external information acquisition module, that the traffic light color is to change to a color which prohibits the vehicle from entering the intersection within a predetermined period of time before and after a timing at which the vehicle is to reach the predetermined target position, as disclosed in Goto, with a reasonable expectation of success. Doing so would provide the benefit of executing appropriate driver assistance control reflecting the intensity of a traffic rule and a temporal change in the traffic rule (see at least Goto, [0008]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Patent Publication Number 2023/0001918 to Koike et al. (hereafter Koike) see Fig. 2, and [0039] disclosing that when determination is made not to stop at the stop line L, the stop position setter 22d sets a stop at any one of the stop positions P1, P2, and P3 based on the determination result from the intersection situation determiner 22b and the determination result from the vehicle estimator 22c. The stop positions to be set by the stop position setter 22d are not limited to the three stop positions P1, P2, and P3, and may be two stop positions or less or may be four stop positions or more.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK M. BRADY III whose telephone number is (571)272-7458. The examiner can normally be reached Monday - Friday 7:00 am - 4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Bishop can be reached at 571-270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
PATRICK M. BRADY III
Examiner
Art Unit 3665
/PATRICK M BRADY/Examiner, Art Unit 3665
/Erin D Bishop/Supervisory Patent Examiner, Art Unit 3665