Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/16/2026 has been entered.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification (MPEP 608.01, ¶6.31).
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Communication device
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
Applicant’s amendments correct the identified antecedent basis issue, and thus the related rejections are withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 4-6, 8-10, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Marsh (US20210311183A1) in view of Tanaka (US20210197666A1) and Montemerlo (US8718861B1).
Regarding claim 1, Marsh teaches;
A vehicle control apparatus (taught as a vehicle apparatus, element 405, paragraph 0137) comprising:
a sensor, mounted on a host vehicle (taught as a set of sensors for a radar-camera setup, such as element 120B), configured to detect an object that is present outside of said host vehicle, and obtain host-vehicle-object-information on said detected object (taught as the radar-camera sensor detecting objects relative to the [host] vehicle, paragraph 0127);
a receiving device (taught as a communication interface, element 220, for communicating with vehicles, paragraph 0130) configured to receive external object information on an object detected by an external device that is located outside of said host vehicle from said external device (taught as the communication interfaces requesting sensor data, paragraph 0140, and receiving a response with information based on sensor data collected outside of the requesting vehicle, paragraph 0141); and
a control unit capable of performing a predetermined control based on at least said host-vehicle-object-information (taught as using sensor module for automotive safety applications, including cruise control, collision warning, collision mitigation, lane departure warning and the like, paragraph 0127);
wherein, said control unit is configured to:
perform said control based on said host-vehicle-object-information when a blocking condition is not satisfied (taught as using sensor module for automotive safety applications, including cruise control, collision warning, collision mitigation, lane departure warning and the like, paragraph 0127),
said blocking condition being a condition to be satisfied when a sensor object that is an object detected by said sensor is present at a position where said sensor object blocks a full scan of said sensor (taught as detecting an occluded region in the sensor monitoring region, paragraph 0139); and
perform said control based on said host-vehicle-object-information and said external object information when said blocking condition is satisfied and said external object information is obtained (taught as supplementing the FOV with the received sensor data, paragraph 0168),
and when said blocking condition is satisfied and said external object information cannot be obtained, perform said control based on said host-vehicle-object-information (indicated in the example that there are occasions where, for example, VRU [vulnerable road user] B indicates a lack of relevant sensor data, paragraph 0166, and that any sent sensor data is filtered to avoid delivering irrelevant sensor data, paragraph 0167; thus, in the case where all external object information sources are ‘irrelevant’, the finalized control would be performed without external object information).
While “perform said control” is not explicitly taught, Marsh does heavily indicate such a step in the recitation of automotive safety applications, including cruise control, collision warning, collision mitigation, lane departure warning and the like (paragraph 0127), in fulfilling levels of automation (paragraph 0121), and in that recipients of the sensor-sharing message use the information to determine maneuvers (paragraph 0169). One of ordinary skill in the art would readily recognize controlling a vehicle autonomously based on the sensor data methodology described.
However, Marsh does not explicitly teach; wherein the blocking condition is satisfied when a distance between said host vehicle and said sensor object is equal to or shorter than a predetermined distance, and when said blocking condition is satisfied and said external object information cannot be obtained, notify a driver that said blocking condition is satisfied (emphasis added).
Tanaka teaches; wherein the blocking condition is satisfied when a distance between said host vehicle and said sensor object is equal to or shorter than a predetermined distance (taught as determining a relative position of an obstruction relative to the vehicle, and when the relative position falls within a detection target region, fulfilling a condition to provide further control/guidance, such as by presenting an image, paragraph 0074).
It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a distance threshold as taught by Tanaka in the system taught by Marsh in order to further assist user guidance. Such guidance allows for distinguishing important information, which, as suggested by Tanaka, reduces the likelihood of a traffic accident (paragraph 0034).
However, Tanaka does not explicitly teach; when said blocking condition is satisfied and said external object information cannot be obtained, notify a driver that said blocking condition is satisfied.
Montemerlo teaches; when said blocking condition is satisfied and said external object information cannot be obtained, notify a driver that said blocking condition is satisfied (taught as generating an alert to a driver if deviation values are not within the relevant threshold range, column 16 lines 19-25).
It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to warn a driver of a malfunctioning sensor as taught by Montemerlo in the system taught by Marsh in order to improve safety. When detecting an occlusion of an FOV, there is a chance that it can be due to sensor malfunctions, which result in deviating sensor values. By warning a driver of the deviation, one can prep a driver about the potential need to take over full control of the vehicle, as suggested by Montemerlo (column 16 lines 22-25).
Regarding claim 4, Marsh as modified by Tanaka and Montemerlo teaches;
The vehicle control apparatus according to claim 1 (see claim 1 rejection). Marsh further teaches; wherein, said control unit is configured to:
when said blocking condition is not satisfied, perform said control based on a collision possibility of a collision between said host vehicle and an object recognized based on said host-vehicle-object-information (taught as collision warning or collision mitigation/avoidance via autonomous braking using sensor information, paragraph 0127); and
when said blocking condition is satisfied, perform said control based on a collision possibility of a collision between said host vehicle and an object recognized based on said host-vehicle-object-information and said external object information (taught as collision mitigation/avoidance via autonomous braking using sensor information, paragraph 0127, which includes the supplemental received sensor data to fill in gaps of the FOV, paragraph 0141).
Regarding claim 5, Marsh as modified by Tanaka and Montemerlo teaches;
The vehicle control apparatus according to claim 4 (see claim 4 rejection). Marsh further teaches; wherein, said sensor is configured to detect an object that is present in a side area of said host vehicle as said sensor object (exemplified in Fig 7, Fig 9 as an occlusion of the FOV), and to obtain said host-vehicle-object-information on said sensor object (taught as detecting and identifying objects using sensors such as radar and camera, paragraph 0122-0123); and
said control unit is configured to:
in a case where said blocking condition is not satisfied, perform, as said control, at least one of an alert control to notify a driver of presence of said object and a deceleration control to decelerate said host vehicle (taught as collision warning using sensor information, paragraph 0127), when it is determined that said object recognized based on said host-vehicle-object-information is present in a predetermined side collision area that has been set in a side area of said host vehicle and is coming close to said host vehicle (indicated in that sensor FOV includes areas in front of, behind and to the sides of the vehicle, paragraph 0122); and
in a case where said blocking condition is satisfied, perform, as said control, at least one of said alert control and said deceleration control (taught as collision mitigation/avoidance via autonomous braking using sensor information, paragraph 0127, which includes the supplemental received sensor data to fill in gaps of the FOV, paragraph 0141), when it is determined that said object recognized based on said host-vehicle-object-information and said external object information is present in said side collision area and is coming close to said host vehicle (indicated in that sensor FOV includes areas in front of, behind and to the sides of the vehicle, paragraph 0122).
Regarding claim 6, Marsh as modified by Tanaka and Montemerlo teaches;
The vehicle control apparatus according to claim 1 (see claim 1 rejection). Marsh further teaches; wherein, said control unit is configured to, when a kind of said sensor object that is present (taught as detecting objects using the sensor module, paragraph 0127) at said position where said sensor object blocks said full scan of said sensor is a vehicle (taught as detecting an occluded region in the FOV, paragraph 0139), obtain said external object information by carrying out a vehicle-to-vehicle communication with this vehicle (taught as using vehicle to vehicle communication by requesting sensor information, paragraph 0140, and receiving sensor information from other vehicles, paragraph 0141).
Regarding claims 8-9, it has been determined that no further limitations exist apart from those previously addressed in claim 1. Therefore, claims 8-9 are rejected under the same rationale as claim 1.
Regarding claim 10, Marsh as modified by Tanaka and Montemerlo teaches;
The vehicle control apparatus according to claim 1 (see claim 1 rejection). However, Marsh does not explicitly teach; wherein said control unit is configured to:
determine, when said distance between said host vehicle and said sensor object is equal to or shorter than said predetermined distance, whether said sensor object continues to be positioned at a blocking position where said sensor object blocks said sensor based on a relative speed of said sensor object with respect to said host vehicle; and
determine that said blocking condition is satisfied when said sensor object continues to be positioned at said blocking position.
Tanaka teaches; wherein said control unit (taught as a vehicle display control apparatus, element 200) is configured to:
determine, when said distance between said host vehicle and said sensor object is equal to or shorter than said predetermined distance (taught as determining a relative position of the obstruction relative to the vehicle, and determining when the relative position of the obstruction is within the detection target region, paragraph 0053), whether said sensor object continues [interpreted to mean the sensor/controller updates over time, such that the method to determine blocking condition is repeated] to be positioned at a blocking position where said sensor object blocks said sensor based on a relative speed of said sensor object with respect to said host vehicle (taught as determining a relative position of an obstruction relative to the vehicle, and when the relative position falls within a detection target region, paragraph 0074); and
determine that said blocking condition is satisfied when said sensor object continues to be positioned at said blocking position (taught as determining a relative position of an obstruction relative to the vehicle, and when the relative position falls within a detection target region, fulfilling a condition to provide further control/guidance, such as by presenting an image, paragraph 0074; as the control/guidance image is only presented when the condition is fulfilled, one can reasonably assume that, should the obstruction move out of the defined detection target region, the condition would no longer be fulfilled, and therefore the control/guidance would no longer be presented).
It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a distance threshold as taught by Tanaka in the system taught by Marsh in order to further assist user guidance. Such guidance allows for distinguishing important information, which, as suggested by Tanaka, reduces the likelihood of a traffic accident (paragraph 0034).
Regarding claim 18, Marsh as modified by Tanaka and Montemerlo teaches;
The vehicle control apparatus according to claim 6 (see claim 6 rejection). Marsh further teaches; wherein said control unit is further configured to:
determine whether infrastructure equipment communicable with said host vehicle is present when said sensor object present at a position where said sensor object blocks said sensor is not a vehicle (indicated in the vehicle communicating both with infrastructure in the form of a neighboring communication device [roadside unit RSU 170B] and with other vehicles [VRU B, Vehicle C], e.g. Fig 11, 1115; in other words, the source of the sensor data to be fused does not matter, so long as it can be communicated with to provide potential sensor data to resolve the blind spot), or [examiner interprets this to indicate that only one of the conditions need be satisfied to fulfill the limitations] when vehicle-to-vehicle communication with said sensor object cannot be performed,
obtain external object information from said infrastructure equipment when said infrastructure equipment is present (taught as communicating to an RSU, e.g. Fig 11, RSU 170B, which provides filtered sensor data, 1135); and
when said infrastructure equipment is not present [examiner notes that, as currently phrased, it could be interpreted to mean that V2V communication with the sensor object cannot be performed, but other vehicles in the vicinity could be communicated with] (taught as communicating with vehicles in the vicinity, such as VRU B and vehicle C, Fig 11; indicating that communication without infrastructure is at least attempted), determine that said external object information cannot be obtained (indicated in the example that there are occasions where, for example, VRU [vulnerable road user] B indicates a lack of relevant sensor data, paragraph 0166, and that any sent sensor data is filtered to avoid delivering irrelevant sensor data, paragraph 0167; thus, in the case where all external object information sources are ‘irrelevant’, the finalized control would be performed without external object information).
However, Marsh does not explicitly teach; notify said driver that said blocking condition is satisfied.
Montemerlo teaches; notify said driver that said blocking condition is satisfied (taught as generating an alert to a driver if deviation values are not within the relevant threshold range, column 16 lines 19-25).
It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to warn a driver of a malfunctioning sensor as taught by Montemerlo in the system taught by Marsh in order to improve safety. When detecting an occlusion of an FOV, there is a chance that it can be due to sensor malfunctions, which result in deviating sensor values. By warning a driver of the deviation, one can prep a driver about the potential need to take over full control of the vehicle, as suggested by Montemerlo (column 16 lines 22-25).
Regarding claims 19-20, it has been determined that no further limitations exist apart from those previously addressed in claim 18. Therefore, claims 19-20 are rejected under the same rationale as claim 18.
Claim(s) 3, 11-12, 14-15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Marsh (US20210311183A1) as modified by Tanaka (US20210197666A1) and Montemerlo (US8718861B1), and further in view of Fukatsu (Automated Driving with Cooperative Perception Using Millimeter-wave V2I Communications for Safe and Efficient Passing Through Intersections, 2021) and Schiegg (Collective Perception: A Safety Perspective, 2020).
Regarding claim 3, Marsh as modified by Tanaka and Montemerlo teaches;
The vehicle control apparatus according to claim 1 (see claim 1 rejection). However, Marsh does not teach; wherein,
said control unit is configured to determine that said blocking condition is satisfied when
a vehicle speed of said host vehicle is equal to or lower than a first speed threshold and
a magnitude of a relative speed of said sensor object with respect to said host vehicle is equal to or lower than a second speed threshold.
Fukatsu teaches; a vehicle speed of said host vehicle is equal to or lower than a first speed threshold.
Specifically, Fukatsu teaches; a theoretical maximum velocity at different average communication rates to ensure no collisions.
[media_image1.png, greyscale] Figure 1: The contour plot shows the required sensor data rate at each Vt, and the colored [negative slope] lines show the outage capacity realized by each carrier frequency (Fig 5 in Fukatsu).
As presented in Fukatsu in the figure above, effective data rates at 60 GHz would only allow a ‘safe’ interaction for a vehicle traveling at a maximum of 56 km/hr, and data rates at 30 GHz would only be safe for a vehicle traveling at a maximum of 47 km/hr. Anything faster would no longer be effective for cooperative perception. While the exact numbers aren’t necessarily relevant, the principle that there exists a maximum speed above which cooperative perception is ineffective indicates a speed-dependent trigger/threshold below which such data is relied on to maintain a collision-free status. This effectively corresponds to requiring a condition such that “when a vehicle speed of said host vehicle is equal to or lower than a first speed threshold” is satisfied, wherein speed values above the presented maximum compared to the available data rates would not trigger [or fully rely on] cooperative perception.
However, Fukatsu does not explicitly suggest; a magnitude of a relative speed of said sensor object with respect to said host vehicle is equal to or lower than a second speed threshold.
Schiegg teaches; a magnitude of a relative speed of said sensor object with respect to said host vehicle is equal to or lower than a second speed threshold.
Specifically, Schiegg addresses that object detection accuracy is affected by relative speed; vehicles with higher relative speed differences may deeply enter a vehicle’s perception range before a necessary number of measurements can take place for object tracking accuracy (section 5.3.3, page 17). This effectively corresponds to a condition such that “when a magnitude of a relative speed of said sensor object with respect to said host vehicle is equal to or lower than a second speed threshold” is satisfied, wherein a relative speed of a cooperating vehicle impacts the accuracy of shared data.
One of ordinary skill in the art, as evidenced by the research of Fukatsu and Schiegg, would recognize that a cutoff exists beyond which the data obtained via cooperative sensing is ineffective or inaccurate, and that these cutoffs are dependent on vehicle speed/relative speed between vehicles, and would thus implement such a cutoff before attempting to use cooperative perception data in maneuvering a vehicle in the system taught by Marsh.
Regarding claim 11, Marsh as modified by Tanaka and Montemerlo teaches;
The vehicle control apparatus according to claim 1 (see claim 1 rejection). However, Marsh does not explicitly teach; wherein said first threshold speed is set to a vehicle speed of when said host vehicle is performing creep driving [examiner notes that this merely amounts to intended use, and thus one merely needs to be capable of performing the intended function to cover the claims], and
wherein said second threshold speed is set to a predetermined value for determining that said sensor object continues to be positioned at a blocking position where said sensor object blocks said sensor.
Fukatsu teaches; wherein said first threshold speed is set to a vehicle speed of when said host vehicle is performing creep driving [examiner notes that this merely amounts to intended use, and thus one merely needs to be capable of performing the intended function to cover the claims].
While the exact numbers aren’t necessarily relevant, the principle that there exists a maximum speed above which cooperative perception is ineffective indicates a speed-dependent trigger/threshold below which such data is relied on to maintain a collision-free status. This effectively corresponds to requiring a condition such that “when a vehicle speed of said host vehicle is equal to or lower than a first speed threshold” is satisfied, wherein speed values above the presented maximum compared to the available data rates would not trigger [or fully rely on] cooperative perception.
Schiegg teaches; wherein said second threshold speed is set to a predetermined value for determining that said sensor object continues to be positioned at a blocking position where said sensor object blocks said sensor
Specifically, Schiegg addresses that object detection accuracy is affected by relative speed; vehicles with higher relative speed differences may deeply enter a vehicle’s perception range before a necessary number of measurements can take place for object tracking accuracy (section 5.3.3, page 17). This effectively corresponds to a condition such that “when a magnitude of a relative speed of said sensor object with respect to said host vehicle is equal to or lower than a second speed threshold” is satisfied, wherein a relative speed of a cooperating vehicle impacts the accuracy of shared data, and a predetermined threshold difference of relative speed can be used to determine a condition for usable sensor data.
One of ordinary skill in the art, as evidenced by the research of Fukatsu and Schiegg, would recognize that a cutoff exists beyond which the data obtained via cooperative sensing is ineffective or inaccurate, and that these cutoffs are dependent on vehicle speed/relative speed between vehicles, and would thus implement such a cutoff before attempting to use cooperative perception data in maneuvering a vehicle in the system taught by Marsh.
Regarding claims 12 and 15, it has been determined that no further limitations exist apart from those previously addressed in claim 3. Therefore, claims 12 and 15 are rejected under the same rationale as claim 3.
Regarding claims 14 and 17, it has been determined that no further limitations exist apart from those previously addressed in claim 11. Therefore, claims 14 and 17 are rejected under the same rationale as claim 11.
Response to Arguments
Applicant argues on pages 13-14 of the remarks that the recited prior art does not address the amended claim limitations requiring that two conditions be met regarding a blocking condition and whether external object information can be obtained.
The examiner agrees that Marsh and Tanaka do not explicitly teach the amended material, and withdraws the previous rejection.
However, the examiner notes that the argument that “an alert is suppressed when the external object information can be obtained” is not currently reflected in the claim language; rather, there is only mention of notifying a driver if the external object information cannot be obtained in one case. The claim does not explicitly suppress the alert, or produce it only when the condition is fulfilled, and rather only requires that such a notification be present in that case. Thus, the case taught in Montemerlo, which provides a notification to a driver in the case of a blocked sensor condition, would still fulfill all the claimed limitations. The examiner recommends amending the claim language to further indicate the suppression of an alert.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
For further collision warning: US20210370928A1, US20160332569A1
For further side collision risk estimation: US20200282985A1
For further distance considerations for obstruction relating to the amended independent claim material: US20120323403A1
For further sensor fusion and response to occlusion, such that a driver is notified if the sensor is occluded when other sensors do not correlate with the sensor results: US9453910B2 (which teaches using supplementary sensor information to compare to a suspected blocked sensor [radar], column 8 lines 44-53, where a blocked condition would be communicated to the operator, column 12 lines 47-54; this effectively represents the idea of using sensor fusion to remedy/confirm a sensor blockage condition, and in the case where it can be compensated for, continuing normal operation, but otherwise notifying the operator and performing certain procedures, such as disabling control dependent on the sensor).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL ANFINRUD whose telephone number is (571)270-3401. The examiner can normally be reached M-F 1:00-9:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith can be reached at (571)270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GABRIEL ANFINRUD/ Examiner, Art Unit 3662
/JELANI A SMITH/ Supervisory Patent Examiner, Art Unit 3662