DETAILED ACTION
This Office action is in response to the application filed on 9/10/2024. Claims 1-7 are pending.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“three-dimensional information generation unit” in claim(s) 1, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit such as a three-dimensional information generation unit 12”, [0023].
“three-dimensional information accumulation unit” in claim(s) 1-2, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…a three-dimensional information accumulation unit 14”, [0024].
“three-dimensional information update unit” in claim(s) 1, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…a three-dimensional information update unit 13”, [0024].
“specific vehicle recognition unit” in claim(s) 2, 5, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…a specific vehicle recognition unit 17”, [0024].
“road surface information estimation unit” in claim(s) 2, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…a road surface information estimation unit 15”, [0024].
“specific vehicle information estimation unit” in claim(s) 2, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…a specific vehicle information estimation unit 18”, [0024].
“free space recognition unit” in claim(s) 2, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…a free space recognition unit 16”, [0024].
“specific vehicle passable region determination unit” in claim(s) 2, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…a specific vehicle passable region determination unit 19”, [0024].
“evacuation region determination unit” in claim(s) 2, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…an evacuation region determination unit 1a”, [0024].
“vehicle action plan generation unit” in claim(s) 2-4, described in Applicant’s specification as “the external environment recognition device 10 is specifically a computer including an arithmetic device such as a CPU, a storage device such as a semiconductor memory, and hardware such as a communication device. Then, the arithmetic device executes a predetermined program to realize each functional unit”, [0023], “As illustrated in FIG. 1, the external environment recognition device 10 of the present embodiment includes…a vehicle action plan generation unit 1b”, [0024].
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (US 20210365038 A1) in view of Hartmann et al. (US 20200406897 A1).
Regarding claim 1, and similarly claim 7, Ma teaches An external environment recognition device comprising:
a plurality of cameras (“the mobile platform includes one or more sensors (e.g., distance measurement device 140 of FIG. 1) configured to measure the distance between an object and the mobile platform…the distance measurement device is a stereo vision system that can provide stereo visual data, from which depth information can be determined. The stereo vision system can be stereo camera(s).”, [0041], “With reference to FIG. 10, the architecture 1000 can include a sensor layer 1010, which can include access to various sensors (e.g., LiDAR, stereo cameras”, [0043], Figs. 1, 10);
a three-dimensional information generation unit (“controller” [0044]) that generates three-dimensional information by performing stereo matching processing in each of the plurality of stereo vision regions (“With reference to FIG. 2, at block 210, the method 200 includes determining (a) three-dimensional (3D) environment information that indicates at least a portion of an environment within a proximity of the mobile platform”, [0045], “Illustratively, determining the 3D environment information can be based, on a visual sensor (e.g., a stereo camera)…stereo images produced by a stereo camera can be used to generate high-precision 3D environment information that models at least part of the environment surrounding the mobile platform (e.g., toward the direction where the mobile platform is headed).”, [0046]);
a three-dimensional information accumulation unit (“controller” [0044]) that accumulates the three-dimensional information generated during traveling of the host vehicle in time series (Ma teaches generating a “3D representation of at least a portion of the environment” where the “real-time environment information can be based on sensor data” [0020], and further that “As the mobile platform is navigating, it can continuously update the 3D environment information based on newly-collected sensor” data [0046]. Thus, the “3D environment information” of Ma is accumulated in time series); and
a three-dimensional information update unit (“controller” [0044]) that updates the three-dimensional information accumulated in the three-dimensional information accumulation unit using three-dimensional information newly generated by the three-dimensional information generation unit (“As the mobile platform is navigating, it can continuously update the 3D environment information based on newly-collected sensor data (i.e., new stereo images, new point clouds, or the like).”, [0046]).
Ma does not explicitly teach the “three-dimensional information generation unit”, “three-dimensional information accumulation unit”, and “three-dimensional information update unit”. However, it would have been obvious to one of ordinary skill in the art before the effective filing date to make the “controller” [0044] separate components since it has been held that constructing a formerly integral structure in various elements involves only routine skill in the art. In re Dulberg, 289 F.2d 522, 523, 129 USPQ 348, 349 (CCPA 1961) (The claimed structure, a lipstick holder with a removable cap, was fully met by the prior art except that in the prior art the cap is "press fitted" and therefore not manually removable. The court held that "if it were considered desirable for any reason to obtain access to the end of [the prior art’s] holder to which the cap is applied, it would be obvious to make the cap removable for that purpose.").
Further, Ma does not explicitly teach
a plurality of cameras installed in such a way to have a plurality of stereo vision regions in which at least a part of a visual field region overlaps around a host vehicle.
However, Ma teaches “stereo images produced by a stereo camera can be used to generate high-precision 3D environment information that models at least part of the environment surrounding the mobile platform (e.g., toward the direction where the mobile platform is headed)” [0046].
Further, Hartmann teaches
a plurality of cameras installed in such a way to have a plurality of stereo vision regions in which at least a part of a visual field region overlaps around a host vehicle (“FIG. 1 shows the capture areas 1a-1d, 2, 3 of a camera system arranged in or respectively on a first vehicle E…The camera system of the first vehicle E includes three different camera subsystems 1, 2, 3: a surround view system 1 comprising four individual camera sensors with wide-angled capture areas 1a-1d which, together, make it possible to capture a 360° view of the vehicle, a front-facing camera having a forward-facing capture area 2 and a rearview camera having a backward-facing capture area 3.”, [0074], “An advantageous embodiment uses digital image processing and machine learning algorithms, with the objective of robustly detecting roadway reflections in conjunction with substances covering a roadway potentially being whirled up such as water or snow, in order to recognize roadway conditions such as dry, wet, snowy, icy and hazardous situations such as, for example, aquaplaning. The method is suitable in extracts both for mono, stereo”, [0094], Fig. 1).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Ma with the teachings of Hartmann such that the plurality of stereo cameras of Ma comprise additional stereo cameras such that stereo vision regions have fields of view that overlap, as suggested by Hartmann, with a reasonable expectation of success. The motivation for doing so would be “to capture a 360° view of the vehicle” [0074], as taught by Hartmann.
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (US 20210365038 A1) in view of Hartmann et al. (US 20200406897 A1), and further in view of Motoyama (US 20220340130 A1).
Regarding claim 2, Ma in view of Hartmann teaches The external environment recognition device according to claim 1, and Ma further teaches further comprising:
a specific vehicle recognition unit that recognizes a specific vehicle to be controlled among other vehicles around a host vehicle using an image acquired by at least one of the cameras (“by analyzing a single frame of image as shown in FIG. 6, the controller can detect…moving obstacles 706, and direction signs 708 using applicable pattern recognition and/or machine learning techniques.”, [0053]);
Motoyama teaches
a road surface information estimation unit (“map analysis unit 151”, Fig. 1) that estimates a road surface shape based on the three-dimensional information accumulated in the three-dimensional information accumulation unit (“The map analysis unit 151 performs processing of analyzing various maps stored in the storage unit 111 while using data or a signal from each unit of the vehicle control system 100 such as the self-position estimation unit 132 and the outside-vehicle information detection unit 141 as necessary, and constructs a map that contains information necessary for processing of automated driving.”, [0102]);
a specific vehicle information estimation unit (“vehicle detection unit 211”, Fig. 6) that estimates a position and a size of the specific vehicle based on an image acquired by the camera and a road surface shape estimated by the road surface information estimation unit (“The vehicle detection unit 211 performs image recognition based on an image (including a two-dimensional image and a stereo image) captured by the camera 201 to detect a region of a vehicle in the image, and outputs a detection result indicating the region of the vehicle to the situation recognition unit 153.”, [0127], “The vehicle tracking unit 232 acquires, in chronological order, information regarding a region of a vehicle in an image supplied from the vehicle detection unit 211, tracks movements of all preceding vehicles and oncoming vehicles including an emergency vehicle, and outputs a tracking result to the situation prediction unit 154.”, [0142], see also “width of the emergency vehicle”, [0390] and [0277-0279]);
a free space recognition unit (“attribute recognition unit 212”, Fig. 6) that recognizes a free space in which a vehicle is allowed to travel based on the three-dimensional information accumulated in the three-dimensional information accumulation unit (“in order to implement the evacuation mode, as illustrated in a central part of FIG. 7, it is necessary to set, in a travelable region Z2, an available-for-evacuation region constituted by regions Z11 indicating evacuation spaces available for safe evacuation, and a dangerous region (unavailable-for-evacuation region) Z3 that is not available for safe evacuation, in addition to a lane region Z12, which is a traveling lane.”, [0168], “when an image as illustrated in the left part of FIG. 7 is captured by the camera 201, the attribute recognition unit 212 outputs, as image attribute information, an object recognition result constituted by an available-for-evacuation region constituted by the region Z12 and the regions Z11 excluding the region Z3, and the dangerous region (unavailable-for-evacuation region) Z3 for the region Z2 as a travelable region as illustrated in the right part of FIG. 7.”, [0177], Figs. 4, 7, 15 where the “travelable region Z2” corresponds to Applicant’s “free space”);
a specific vehicle passable region determination unit (“planning unit 134”, Fig. 5) that determines a specific vehicle passable region through which the specific vehicle is allowed to pass in the free space based on a position and a size of the specific vehicle estimated by the specific vehicle information estimation unit (“in the case of FIG. 15, the planning unit 134 sets, as a cleared region Z31, a region including a region having the width W12 on the traveling lane side for allowing the emergency vehicle EC to pass, in the available-for-evacuation region in the evacuation space map.”, [0244] “when setting the cleared region Z31, the planning unit 134 sets the cleared region Z31 constituted by the region having the width W12 for the emergency vehicle EC such that a region recognized as the region Z12 constituted by a traveling lane is included, for example, on the basis of the image attribute information.”, [0245], Fig. 15, where the “cleared region Z31” corresponds to Applicant’s “specific vehicle passable region”);
an evacuation region determination unit (“attribute recognition unit 212”, Fig. 6) that determines an evacuation region in which the host vehicle evacuates based on the specific vehicle passable region and the free space (“as illustrated in FIG. 15, all three types of regions, that is, an obstacle region Z1, a traveling lane region Z12, and a region Z11 constituted by a road shoulder or the like that is available for evacuation, are set on the right side with respect to the forward direction of the own vehicle CS, for example.”, [0239], “when an image as illustrated in the left part of FIG. 7 is captured by the camera 201, the attribute recognition unit 212 outputs, as image attribute information, an object recognition result constituted by an available-for-evacuation region constituted by the region Z12 and the regions Z11 excluding the region Z3, and the dangerous region (unavailable-for-evacuation region) Z3 for the region Z2 as a travelable region as illustrated in the right part of FIG. 7.”, [0177], Fig. 15, where the “region Z11” corresponds to Applicant’s “evacuation region”); and
a vehicle action plan generation unit (“planning unit 134”, Fig. 5) that generates an action plan of the host vehicle based on the evacuation region (“At the time of emergency, the own vehicle CS clears, for an emergency vehicle, for example, the lane region Z12, which is a traveling lane, in the available-for-evacuation region constituted by the regions Z11 and Z12, sets an evacuation space that is available for safely pulling over, and pulls over.”, [0069], “the planning unit 134 searches for an evacuation space on the basis of information of the evacuation space map supplied together, generates operation control information for safely pulling over the vehicle to the searched evacuation space, and outputs the operation control information to the operation control unit 135”, [0155]).
Motoyama does not explicitly teach the “free space recognition unit” and “evacuation region determination unit”. However, it would have been obvious to one of ordinary skill in the art before the effective filing date to make the “attribute recognition unit 212” (Fig. 6) separate components since it has been held that constructing a formerly integral structure in various elements involves only routine skill in the art. In re Dulberg, 289 F.2d 522, 523, 129 USPQ 348, 349 (CCPA 1961) (The claimed structure, a lipstick holder with a removable cap, was fully met by the prior art except that in the prior art the cap is "press fitted" and therefore not manually removable. The court held that "if it were considered desirable for any reason to obtain access to the end of [the prior art’s] holder to which the cap is applied, it would be obvious to make the cap removable for that purpose.").
Motoyama does not explicitly teach the “specific vehicle passable region determination unit” and “vehicle action plan generation unit”. However, it would have been obvious to one of ordinary skill in the art before the effective filing date to make the “planning unit 134” (Fig. 5) separate components since it has been held that constructing a formerly integral structure in various elements involves only routine skill in the art. In re Dulberg, 289 F.2d 522, 523, 129 USPQ 348, 349 (CCPA 1961) (The claimed structure, a lipstick holder with a removable cap, was fully met by the prior art except that in the prior art the cap is "press fitted" and therefore not manually removable. The court held that "if it were considered desirable for any reason to obtain access to the end of [the prior art’s] holder to which the cap is applied, it would be obvious to make the cap removable for that purpose.").
Further, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Ma in view of Hartmann with the teachings of Motoyama such that the external environment recognition device of Ma is further configured to determine an evacuation region and control the host vehicle based on the evacuation region, as suggested by Motoyama, with a reasonable expectation of success. The motivation for doing so would be to “determine[] whether or not the road shoulder is available for pulling over, and the vehicle is pulled over if the road shoulder is available for pulling over” [0004], as taught by Motoyama.
Regarding claim 3, Ma in view of Hartmann and Motoyama teaches The external environment recognition device according to claim 2, and Motoyama further teaches further comprising a traffic rule database (“traffic rule recognition unit 152”, Fig. 1) in which traffic rules are registered (“The traffic rule recognition unit 152 performs processing of recognizing traffic rules around the own vehicle on the basis of data or a signal from each unit of the vehicle control system 100 such as the self-position estimation unit 132, the outside-vehicle information detection unit 141, and the map analysis unit 151. By this recognition processing, for example, the position and state of a signal around the own vehicle, contents of traffic regulations around the own vehicle, a lane available for traveling, and the like are recognized.”, [0103]),
wherein the vehicle action plan generation unit generates the action plan in accordance with the traffic rules (“The situation prediction unit 154 supplies data indicating a result of the prediction processing, together with data from the traffic rule recognition unit 152 and the situation recognition unit 153, to the route planning unit 161, the action planning unit 162, and the operation planning unit 163 of the planning unit 134, and the like.”, [0109]).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to further modify the invention of Ma in view of Hartmann with the teachings of Motoyama such that the external environment recognition device of Ma is further configured to control the host vehicle in accordance with stored traffic rules, as suggested by Motoyama, with a reasonable expectation of success. The motivation for doing so would be to “determine[] whether or not the road shoulder is available for pulling over, and the vehicle is pulled over if the road shoulder is available for pulling over” [0004] while considering local traffic rules around the vehicle [0103], as taught by Motoyama.
Regarding claim 4, Ma in view of Hartmann and Motoyama teaches The external environment recognition device according to claim 2, and Motoyama further teaches further comprising a map database (“storage unit 111”, Fig. 1) in which road information is registered (“the storage unit 111 stores map data such as a three-dimensional high definition map such as a dynamic map, a global map that is less accurate than the high definition map and covers a wider area, and a local map that contains information about surroundings of the own vehicle.”, [0094]),
wherein the vehicle action plan generation unit generates the action plan in accordance with the road information (“A configuration for generating an evacuation space map, specifying an evacuation space on the basis of the evacuation space map in the event of an emergency, and pulling over the vehicle is constituted by…the storage unit 111”, [0120], see also [0102]).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to further modify the invention of Ma in view of Hartmann with the teachings of Motoyama such that the external environment recognition device of Ma is further configured to control the host vehicle in accordance with stored road information, as suggested by Motoyama, with a reasonable expectation of success. The motivation for doing so would be to “determine[] whether or not the road shoulder is available for pulling over, and the vehicle is pulled over if the road shoulder is available for pulling over” [0004] while considering diverse map data where the vehicle travels [0102], as taught by Motoyama.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (US 20210365038 A1) in view of Hartmann et al. (US 20200406897 A1) and Motoyama (US 20220340130 A1), and further in view of Foster et al. (US 20230140569 A1).
Regarding claim 5, Ma in view of Hartmann and Motoyama teaches The external environment recognition device according to claim 2,
Motoyama teaches “The emergency vehicle identification unit 231 identifies whether an emergency vehicle is approaching on the basis of…information regarding a region of a vehicle in an image supplied from the vehicle detection unit 211” [0139], but does not explicitly teach what features in the image are used to identify the emergency vehicle.
However, Foster teaches
wherein the specific vehicle recognition unit recognizes a specific vehicle in emergency travel based on presence or absence of blinking of a rotating light in the image (“Cameras included in the vehicle sensor subsystems 144 may be rear facing so that flashing lights from emergency vehicles may be observed from all around the autonomous truck 105. These cameras may include video cameras, cameras with filters for specific wavelengths, as well as any other cameras suitable to detect emergency vehicle lights based on color, flashing, or both color and flashing.”, [0338]).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Ma in view of Hartmann and Motoyama with the teachings of Foster such that the specific vehicle recognition unit of Ma is further configured to recognize a traveling emergency vehicle based on detecting emergency vehicle lighting, as suggested by Foster, with a reasonable expectation of success. The motivation for doing so would be “to yield to any approaching emergency vehicle that has activated its siren and/or emergency lights” [0928], as taught by Foster.
Regarding claim 6, Ma in view of Hartmann and Motoyama teaches The external environment recognition device according to claim 2, but does not explicitly teach recognizing a bus traveling on a bus priority road as the specific vehicle.
However, Foster teaches
wherein the specific vehicle recognition unit recognizes a bus traveling on a bus priority road as the specific vehicle (“the in-vehicle control computer 150 can be configured to detect that a vehicle is a school bus in response to determining that the vehicle's color is yellow (national school bus glossy yellow), the words “school bus” appear on the front and end of the vehicle, and/or flashing amber lights are located on the front and rear of the vehicle.”, [0798], “The in-vehicle control computer 150 can be configured to detect a school bus and its associated lane from a predetermined minimum distance.”, [0801], see also “A temporary bus lane is an emergency lane that serves as a driving lane dedicated to buses.”, [0840], “The autonomous vehicle 105 can be configured to perform a number of different tasks related to the physical infrastructure on or near the roadway. Examples of physical infrastructure that the autonomous vehicle 105 can be configured to detect and respond to include:…emergency lanes”, [0906], “the in-vehicle control computer 150 is configured to establish exclusion zones defining a minimum distance that the autonomous vehicle 105 is configured to stay away from any emergency lane vehicles (ELVs).”, [0909]).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Ma in view of Hartmann and Motoyama with the teachings of Foster such that the specific vehicle recognition unit of Ma is further configured to recognize a bus traveling in a bus lane, as suggested by Foster, with a reasonable expectation of success. The motivation for doing so would be “to maintain a predetermined minimum safe distance when travelling in the same direction with the school bus” [0808], as taught by Foster.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: See Notice of References Cited.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMELIA VORCE whose telephone number is (313) 446-4917. The examiner can normally be reached on Monday-Friday, 9AM-6PM, Central Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMELIA VORCE/ Primary Examiner, Art Unit 3666