DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 97-112 are presented for examination.
Claims 1-96 and 113-179 are preliminarily cancelled.
Claims 97-112 are rejected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 97-106 and 109-112 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1 – YES
Claim 97 is directed to “A vehicle…”, claim 103 is directed to “A method…”, and claim 109 is directed to “A method…”. Therefore, claims 97, 103, and 109 are within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 103 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. The analogous claims 97 and 109 are rejected for the same reasons as representative claim 103, as discussed herein.
Claim 103 recites:
“A method of operating a vehicle, the method comprising: while a vehicle is traveling along a road surface, determining a location of a road surface feature on the road surface, the location of the road surface feature being relative to the vehicle; and presenting, on a display, the location of the road surface feature on the road surface.”
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, the “determining a location…” step, in the context of the claims, encompasses a driver, an operator, or a person observing, checking, examining, analyzing, evaluating, and judging a bystander, speed, velocity, acceleration, deceleration, obstructions, hurdles, animals, rocks, bumps, potholes, manholes, or construction, as well as the locations and positions of vehicles on the road.
Examiner would also note MPEP 2106.04(a)(2)(III): The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas – the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. Here, the determination is a form of evaluation and judgment based on observation by a driver, an operator, or a bystander.
Accordingly, claim 103 recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
“A method of operating a vehicle, the method comprising: while a vehicle is traveling along a road surface, determining a location of a road surface feature on the road surface, the location of the road surface feature being relative to the vehicle; and presenting, on a display, the location of the road surface feature on the road surface.”
The “presenting, on a display, the location of the road surface feature on the road surface…” step is insignificant extra-solution activity that merely uses a processor to perform the process. In particular, the “presenting…” step amounts to mere data output, which is a form of insignificant extra-solution activity. The “…a display; and a processor…remote sensor…” elements amount to mere post-solution activity and/or instructions to apply the recited abstract idea (e.g., a bystander or a driver observes the speed, velocity, acceleration, deceleration, obstructions, hurdles, animals, rocks, bumps, potholes, manholes, or construction, and the locations and positions of vehicles on the road). Lastly, the “vehicle”, i.e., with sensors, ECUs, controllers, and processors, merely describes how to generally “apply” the otherwise mental judgments in a generic or general-purpose computer environment, where the processor is recited as a generic processor performing the generic computer function of processing data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component and merely automates the determining step.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 103 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using “…a display; and a processor…remote sensor…” amounts to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitations “…a display; and a processor…remote sensor…” are well-understood, routine, and conventional activities using conventional sensors. As explained, the additional elements are recited at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. See, e.g., MPEP § 2106.05; Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223 (2014) (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention”); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016) (selecting information for collection, analysis, and display constitutes insignificant extra-solution activity); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016) (printing and downloading generated menus constitute insignificant extra-solution activity). Hence, claim 103 is not patent eligible.
Dependent Claims
Dependent claims 98-102, 104-106, and 110-112 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. The limitations “…the position is determined at least partially based on road surface information downloaded from a cloud-based database…”, “…wherein the display is selected from the group consisting of a heads-up display and a monitor…”, and “…wherein the processor is further configured to present, on the display, a projected tire path of two front tires of the vehicle…” are further directed toward the abstract idea. The limitation “…wherein presenting the location of the road surface feature comprises presenting a graphical representation of the road surface feature on the display…” is further directed toward insignificant extra-solution activity. Therefore, dependent claims 98-102, 104-106, and 110-112 are not patent eligible under the same rationale as provided in the rejection of independent claims 97, 103, and 109.
As such, claims 97-106 and 109-112 are rejected under 35 U.S.C. § 101 as being directed to an abstract idea without significantly more, and thus are ineligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 97-108 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by KALITA et al. (US Pub. No. 2023/0108406 A1; hereinafter “KALITA”).
Consider claims 97, 103:
KALITA teaches a vehicle (Fig. 1 element V) comprising: a localization system configured to determine a location of the vehicle (See KALITA, e.g., “…determine from geolocation signals the current geolocation of the vehicle… When SLAM process is not possible (bad weather condition or dark conditions), the inertial Measurement Unit can provide precise localization of the vehicle on the road, and position of potholes can be displayed to the truck driver with a good accuracy…” of Abstract, ¶ [0078]-¶ [0088], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99); a display (Fig. 4 element 7); and a processor configured to perform (Fig. 4 element “The electronic control unit 6”) the steps of: obtaining a location of the vehicle from the localization system (See KALITA, e.g., “…determine from geolocation signals the current geolocation of the vehicle…provide precise localization of the vehicle on the road…” of Abstract, ¶ [0078]-¶ [0088], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99); determining the presence of one or more road surface features on a road surface based at least in part on the location of the vehicle (See KALITA, e.g., “…scanning, with the sensing device, an area of interest in front of and ahead of the vehicle, the area of interest including at least a surface of a road traveled by the vehicle…identifying, in the data flow, first candidate potholes formed on the road surface…processing the data flow to find out first confirmed potholes among the first candidate potholes…allocating a geolocation to each of the first confirmed potholes, and displaying, on the cartography display, first confirmed potholes with their localization superimposed on a map…” of ¶ [0005]-¶ [0020], ¶ [0022]-¶ [0038], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B); and presenting on the display a position of the one or more road surface features on the road surface (See KALITA, e.g., “…scanning, with the sensing device, an area of interest in front of and ahead of the vehicle, the area of interest including at least a surface of a road traveled by the vehicle…identifying, in the data flow, first candidate potholes formed on the road surface…displaying, on the cartography display, first confirmed potholes with their localization superimposed on a map…” of ¶ [0005]-¶ [0020], ¶ [0022]-¶ [0038], ¶ [0049]-¶ [0051], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B).
Consider claim 98:
KALITA teaches everything claimed as implemented in the rejection of claim 97 above. In addition, KALITA teaches wherein the position is determined at least partially based on road surface information downloaded from a cloud-based database (See KALITA, e.g., “…a complete scan cycle across the field-of-view outputs a point cloud or an image. The successive point clouds or images can be used to build a rolling map, said rolling map being constructed from all the successive images resulting from the scanning process…vehicles V61 and V62 transmit their information through uplink 18 to the remote server 3…Vehicles V61 and V62 also receive data about potholes via downlink 28 from server(s) or cloud service(s)…” of ¶ [0005]-¶ [0020], ¶ [0022]-¶ [0038], ¶ [0049]-¶ [0051], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B).
Consider claims 99, 104-105:
KALITA teaches everything claimed as implemented in the rejection of claims 97, 103 above. In addition, KALITA teaches wherein the display is selected from the group consisting of a heads-up display and a monitor (See KALITA, e.g., “…identifying, in the data flow, first candidate potholes formed on the road surface…displaying, on the cartography display, first confirmed potholes with their localization superimposed on a map…provided a navigation service helping a truck driver to choose the best itinerary when two or more options are possible…the navigation calculation can show various possible itineraries to go from a departure location denoted A to a destination location B. The display shows the various proposed itineraries to go from A to B, each itinerary being displayed together with a respective cumulated pothole severity rating. The location of outstanding pothole can be displayed with colors and icons…” of ¶ [0022]-¶ [0038], ¶ [0049]-¶ [0051], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], ¶ [0133]-¶ [0134], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B).
Consider claim 100:
KALITA teaches everything claimed as implemented in the rejection of claim 99 above. In addition, KALITA teaches wherein the processor is further configured to present, on the display, a projected tire path of at least one tire of the vehicle (Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B clearly teach “…further comprising characterizing each of the first confirmed potholes by a set of size characteristics, comprising at least one of its depth, its width, and/or its length… displaying, on the cartography display, with various icons and/or colors, each of the first and the second potholes together with a severity rating…”) relative to the one or more road surface features (See KALITA, e.g., “…scanning, with the sensing device, an area of interest in front of and ahead of the vehicle, the area of interest including at least a surface of a road traveled by the vehicle…identifying, in the data flow, first candidate potholes formed on the road surface…displaying, on the cartography display, first confirmed potholes with their localization superimposed on a map…” of ¶ [0005]-¶ [0020], ¶ [0022]-¶ [0038], ¶ [0049]-¶ [0051], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B).
Consider claims 101, 106:
KALITA teaches everything claimed as implemented in the rejection of claims 99, 105 above. In addition, KALITA teaches wherein the processor is further configured to present, on the display, a projected tire path of two front tires of the vehicle (Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B clearly teach “…further comprising characterizing each of the first confirmed potholes by a set of size characteristics, comprising at least one of its depth, its width, and/or its length… displaying, on the cartography display, with various icons and/or colors, each of the first and the second potholes together with a severity rating…”).
Consider claim 102:
KALITA teaches everything claimed as implemented in the rejection of claim 101 above. In addition, KALITA teaches wherein the one or more road surface features comprises a pothole or a bump (See KALITA, e.g., “…detecting, localizing, and reporting potholes on a road…” of ¶ [0005]-¶ [0020], ¶ [0022]-¶ [0038], ¶ [0049]-¶ [0051], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B).
Consider claim 107:
KALITA teaches everything claimed as implemented in the rejection of claim 106 above. In addition, KALITA teaches further comprising, based on the projected tire path of the at least one tire of the vehicle (Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B clearly teach “…further comprising characterizing each of the first confirmed potholes by a set of size characteristics, comprising at least one of its depth, its width, and/or its length… displaying, on the cartography display, with various icons and/or colors, each of the first and the second potholes together with a severity rating…”), adjusting a steering angle of a steering wheel of the vehicle to avoid the road surface feature (See KALITA, e.g., “…the truck driver can efficiently avoid the more severe potholes, for example by a steering correction…” of ¶ [0005]-¶ [0020], ¶ [0022]-¶ [0038], ¶ [0049]-¶ [0051], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B).
Consider claim 108:
KALITA teaches everything claimed as implemented in the rejection of claim 107 above. In addition, KALITA teaches wherein the road surface feature is a pothole (See KALITA, e.g., “…detecting, localizing, and reporting potholes on a road…” of ¶ [0005]-¶ [0020], ¶ [0022]-¶ [0038], ¶ [0049]-¶ [0051], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 109-112 are rejected under 35 U.S.C. 103 as being unpatentable over HUILLE (WO Pub. No. 2021/018660 A1; hereinafter “HUILLE”) in view of KALITA.
Consider claim 109:
HUILLE teaches a method of operating a vehicle under conditions of poor visibility (See HUILLE, e.g., “…a method for providing visual information about a first vehicle (1) in an environment (4) of a second vehicle…detecting a reduced visibility of the first vehicle (1) in the environment (4) of the second vehicle…transmitting an image captured by a camera (7) of the first vehicle (1) to a display device of the second vehicle (2) and of integrating an object (13) representing the first vehicle (1) into the image displayed on the display device of the second vehicle (2). The object (13) is represented in the image (12) at a position which corresponds to the current position of the first vehicle (1) in the environment (4) of the second vehicle (2)…” of Abstract, Page 1:25-37, Page 2:1-37, Page 3:1-37, Page 5:1-37, Page 6:1-37, Page 7:1-37, Page 8:15-31, Page 10:19-37, Page 11:1-37, Page 12:5-27, Page 13:1-37, Page 14:1-37, Page 15:8-29, and Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22), the method comprising: (a) while the vehicle is traveling along a road surface (e.g., the vehicles are travelling on the road, as exhibited in Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22), determining, using at least one remote sensor, a location, relative to the road surface, of at least one other vehicle (See HUILLE, e.g., “…providing visual information about at least one first vehicle in an environment of a second vehicle, comprises the step of detecting a reduced visibility of the at least one first vehicle in the environment of the second vehicle…transmitting an image captured by a camera of the at least one first vehicle to a display device of the second vehicle and of integrating an object representing the at least one first vehicle into the image displayed on the display device of the second vehicle. Herein, the object is represented in the image at a position which corresponds to the current position of the at least one first vehicle in the environment of the second vehicle…Provided that the reduced visibility of the at least one first vehicle is detected, the object which represents the at least one first vehicle is added to the image displayed on the display device. This makes the first vehicle visible to the driver of the second vehicle. And representing or showing the object in the image at the location or position which corresponds to the current position of the first vehicle in the real or actual environment enables the driver of the second vehicle to particularly well estimate a distance between the first vehicle and the second vehicle…” of Abstract, Page 1:25-37, Page 2:1-37, Page 3:1-37, Page 5:1-37, Page 6:1-37, Page 7:1-37, Page 8:15-31, Page 10:19-37, Page 11:1-37, Page 12:5-27, Page 13:1-37, Page 14:1-37, Page 15:8-29, and Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22).
HUILLE further teaches and (b) presenting, on a display, the determined location of the at least one other vehicle in (a) relative to an image of a section of an area ahead (See HUILLE, e.g., “…a view representing at least a section of an area ahead of the at least one first vehicle is transmitted as the image displayed on the display device of the second vehicle. The view can in particular be captured by a front camera of the at least one first vehicle. As the view representing at least the section of the area ahead of at least one first vehicle is shown on the display device of the second vehicle, the visibility of the environment is improved for the driver of the second vehicle looking at the display device of the second vehicle. This is due to the fact that the camera of the first vehicle which is travelling in front of the second vehicle can capture more details of the surroundings of the first vehicle than it is the case for a driver or for a camera of the second vehicle, having the first vehicle in his or its field of view. Therefore, utilizing the view representing at least the section of the area ahead of the first vehicle as the image displayed on the display device leads to an improved visibility of the environment for the driver of the second vehicle…” of Abstract, Page 1:25-37, Page 2:1-37, Page 3:1-37, Page 5:1-37, Page 6:1-37, Page 7:1-37, Page 8:15-31, Page 10:19-37, Page 11:1-37, Page 12:5-27, Page 13:1-37, Page 14:1-37, Page 15:8-29, and Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22). However, HUILLE does not explicitly teach an image of the road surface.
In an analogous field of endeavor, KALITA teaches an image of the road surface (See KALITA, e.g., “…scanning, with the sensing device, an area of interest in front of and ahead of the vehicle, the area of interest including at least a surface of a road traveled by the vehicle…identifying, in the data flow, first candidate potholes formed on the road surface…displaying, on the cartography display, first confirmed potholes with their localization superimposed on a map…” of ¶ [0005]-¶ [0020], ¶ [0022]-¶ [0038], ¶ [0049]-¶ [0051], ¶ [0078]-¶ [0088], ¶ [0100]-¶ [0110], ¶ [0117]-¶ [0130], and Figs. 1-4 elements V, RW, 4-98, Figs. 7-9 elements V1-V8, 3-99, Fig. 1 elements route 1-2, A-B).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine “…a method for providing visual information about a first vehicle (1) in an environment (4) of a second vehicle…detecting a reduced visibility of the first vehicle (1) in the environment (4) of the second vehicle…transmitting an image captured by a camera (7) of the first vehicle (1) to a display device of the second vehicle (2) and of integrating an object (13) representing the first vehicle (1) into the image displayed on the display device of the second vehicle (2). The object (13) is represented in the image (12) at a position which corresponds to the current position of the first vehicle (1) in the environment (4) of the second vehicle (2)…”, as disclosed in HUILLE, with “an image of the road surface”, as taught in KALITA, with a reasonable expectation of success to yield “improve detection, beyond human sight; that is to say detection is performed efficiently at night and/or at times of low visibility, whereas human sight cannot do so. Also, the promoted solution improves safety in non-illuminated tunnels”, as taught in KALITA, ¶ [0012].
Consider claim 110:
The combination of HUILLE, KALITA teaches everything claimed as implemented above in the rejection of claim 109. In addition, HUILLE teaches wherein the conditions of poor visibility are caused by fog (See HUILLE, e.g., “…situation is exemplarily shown in which the first vehicle 1 is not well visible for a driver of the second vehicle 2. This can be due to the presence of fog 10 ahead and around the second vehicle 2. Due to the fog 10 not only the driver of the second vehicle 2 can hardly perceive the first vehicle 1…the probability that the first vehicle 1 disappears in the fog 10 ahead of the second vehicle 2 is reduced, as the image 12 captured when the first vehicle 1 was situated at its previous position is utilized for displaying the image 12 to the driver of the second vehicle 2…” of Abstract, Page 1:25-37, Page 2:1-37, Page 3:1-37, Page 5:1-37, Page 6:1-37, Page 7:1-37, Page 8:15-31, Page 10:19-37, Page 11:1-37, Page 12:5-27, Page 13:1-37, Page 14:1-37, Page 15:8-29, and Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22) and the at least one remote sensor is a radar detector (See HUILLE, e.g., “…a method for detecting bad weather conditions such as fog using laser data provided by a LIDAR unit of a trailing car, i.e. by a car driving behind another car, and by analyzing images captured by a camera of the trailing car…The sensor device can be configured as or comprise a camera and/or a laser device such as a laser scanner…” of Abstract, Page 1:25-37, Page 2:1-37, Page 3:1-37, Page 5:1-37, Page 6:1-37, Page 7:1-37, Page 8:15-31, Page 10:19-37, Page 11:1-37, Page 12:5-27, Page 13:1-37, Page 14:1-37, Page 15:8-29, and Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22).
Consider claim 111:
The combination of HUILLE, KALITA teaches everything claimed as implemented above in the rejection of claim 109. In addition, HUILLE teaches wherein the display is a heads-up display or a monitor (See HUILLE, e.g., “…the image 18 displayed on the screen 17. In the latter case the image 12 comprising the object in form of the three- dimensional model 13 is superimposed on the further image 18 captured by the camera 5 of the second vehicle 2…” of Abstract, Page 1:25-37, Page 2:1-37, Page 3:1-37, Page 5:1-37, Page 6:1-37, Page 7:1-37, Page 8:15-31, Page 10:19-37, Page 11:1-37, Page 12:5-27, Page 13:1-37, Page 14:1-37, Page 15:8-29, and Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22).
Consider claim 112:
The combination of HUILLE, KALITA teaches everything claimed as implemented above in the rejection of claim 109. In addition, HUILLE teaches wherein presenting, on the display, the determined location of the at least one other vehicle comprises presenting a graphical representation (See HUILLE, e.g., “…the image 18 displayed on the screen 17. In the latter case the image 12 comprising the object in form of the three- dimensional model 13 is superimposed on the further image 18 captured by the camera 5 of the second vehicle 2…” of Abstract, Page 1:25-37, Page 2:1-37, Page 3:1-37, Page 5:1-37, Page 6:1-37, Page 7:1-37, Page 8:15-31, Page 10:19-37, Page 11:1-37, Page 12:5-27, Page 13:1-37, Page 14:1-37, Page 15:8-29, and Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22) of the at least one other vehicle on the display (See HUILLE, e.g., “…a view representing at least a section of an area ahead of the at least one first vehicle is transmitted as the image displayed on the display device of the second vehicle. The view can in particular be captured by a front camera of the at least one first vehicle. As the view representing at least the section of the area ahead of at least one first vehicle is shown on the display device of the second vehicle, the visibility of the environment is improved for the driver of the second vehicle looking at the display device of the second vehicle. This is due to the fact that the camera of the first vehicle which is travelling in front of the second vehicle can capture more details of the surroundings of the first vehicle than it is the case for a driver or for a camera of the second vehicle, having the first vehicle in his or its field of view…” of Abstract, Page 1:25-37, Page 2:1-37, Page 3:1-37, Page 5:1-37, Page 6:1-37, Page 7:1-37, Page 8:15-31, Page 10:19-37, Page 11:1-37, Page 12:5-27, Page 13:1-37, Page 14:1-37, Page 15:8-29, and Figs. 1-2 elements 1-20, Figs. 3-4 elements 1-14, Figs. 5-10 elements 1-22).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Levinson et al. (US Pat. No.: 10,486,485 B1) teaches “Vehicle ride can be improved using perception based suspension control on a vehicle. A computing system on the vehicle may receive a map of a road surface. The computing system may identify a trajectory of the vehicle relative to the road surface, and determine if a deformation exists in a track of one of the vehicle tires. The deformation may include a depression and/or a raised portion from the road surface. The computing system may calculate an adjustment to make to one or more suspension system components to negate or minimize the effects of the vehicle traveling over the deformation or roughness of the road, and may send an instruction to adjust the suspension components accordingly. Additionally, or alternatively, the computing system may determine that the deformation may be avoidable, in whole or in part, and may cause the vehicle to maneuver around the deformation.”
Lam et al. (US Pub. No.: 2019/0195628 A1) teaches “Systems and methods for monitoring and assessing road conditions are disclosed herein that receive raw data comprising information indicative of road conditions from various types of remote devices. The road condition data may be normalized based on the type of the remote device and the normalized road condition data may be stored. A road condition model may be generated based on the normalized road condition data and using the device location. The road condition model may be used to identify pothole locations, rate road segments, etc. The road condition model may apply various data set weightings and learning algorithms to present an accurate result. The road condition model may be optimized based on feedback from city personnel and other sources of information to determine the accuracy of the model outputs. The road condition model may also be trained to ensure accuracy.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BABAR SARWAR whose telephone number is (571)270-5584. The examiner can normally be reached on Mon-Fri 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris S. Almatrahi can be reached on (313)446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BABAR SARWAR/Primary Examiner, Art Unit 3667