DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “detection system” and “driving assistance device” in claim 20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 20 refers to “the location information of the pedestrian indicating whether the pedestrian is located at left or right” in lines 12-13; however, claim 20 previously introduces “location information of the pedestrian” in line 4 and previously refers to “the location information of the pedestrian” in each of lines 5-6, lines 9-10, and lines 10-11. It is unclear whether the “location information of the pedestrian indicating whether the pedestrian is located at left or right” in lines 12-13 is intended to be the same as or different from the “location information of the pedestrian” in line 4, lines 5-6, lines 9-10, and/or lines 10-11. Thus, there is improper antecedent basis for the limitation in the claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 12-14, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by KR 10-2022-0132367 A to Maeng (hereinafter: “Maeng”).
With respect to claim 1, Maeng teaches a driving assistance device (100 & 200 together) of a vehicle (2), comprising: a transceiver configured to receive image data from a surveillance camera (110) having a constant sensing field of view within a child protection zone [for example, as depicted by at least Figs. 1, 2 & 9 and as discussed by at least ¶ 0001-0008, 0013, 0017-0020, 0024-0027, 0047-0050, 0055 & 0065, a control unit 150 of the school zone monitoring apparatus 100 of the “driving assistance device” is shown (e.g., see Fig. 9) to be in two-way communication with the closed circuit television (CCTV) camera 110 so as to be definable as a “transceiver” of the “driving assistance device,” where the control unit 150 is structured to perform functions to receive a school zone image (e.g., “image data”) obtained by the CCTV camera 110, where the CCTV camera 110 obtains the school zone image by photographing a preset area within a child protection zone]; and a processor (250) configured to recognize a pedestrian based on the image data received from the surveillance camera [for example, as depicted by at least Figs. 1-9 and as discussed by at least ¶ 0008-0009, 0013, 0019-0022, 0028, 0030-0032, 0041-0042, 0051, 0053-0055, 0060-0061 & 0063, the control unit 250 of the vehicle monitoring apparatus 200 of the “driving assistance device” is structured to perform functions to receive a caution signal (or a warning signal) from the school zone monitoring apparatus 100, where the caution signal (or the warning signal) is generated by the control unit 150 based on the school zone image and enables the control unit 250 to recognize a pedestrian in the preset area], generate location information indicating that the pedestrian is located at left or right of the vehicle based on a driving direction of the vehicle when the pedestrian is recognized in the image data received from the surveillance camera [for example, as depicted by at least Figs. 1-9 and as discussed by at least ¶ 0008, 0021-0023, 0041-0044, 0055, 0056-0057, 0059 & 0063, the control unit 250 is structured to perform functions to synthesize a preset pedestrian character image at an approximate position (e.g., an approximate left position) (e.g., “location information”) on an image in front of the vehicle 2 when the pedestrian is in front of the vehicle 2 (e.g., when the pedestrian is on a left side of a vehicle front) (e.g., “indicating that the pedestrian is located at left or right of the vehicle”), where the image in front of the vehicle 2 is acquired by a black box camera 210 while the vehicle 2 is running (e.g., “based on a driving direction of the vehicle”) based on receiving of the caution signal (or the warning signal) (which includes pedestrian location information) (e.g., “when the pedestrian is recognized in the image data received from the surveillance camera”)], and control a display device of the vehicle to display the location information of the pedestrian indicating whether the pedestrian is located at the left or right of the vehicle [as depicted by at least Figs. 1, 2 & 9 and as discussed by at least ¶ 0008, 0022-0023, 0043-0044, 0056, 0059 & 0063, the control unit 250 is structured to perform functions to control a heads-up display (HUD) 230 (or an in-vehicle display device) (e.g., “display device”) of the vehicle 2 to display the preset pedestrian character image synthesized at the approximate position (e.g., the approximate left position) on the image in front of the vehicle 2].
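Examiner’s note (illustrative only): to make the claim 1 mapping above easier to follow, the following minimal Python sketch restates the claimed left/right determination and display control. It is not Maeng’s implementation; every name and the image-midline test are the examiner’s hypothetical choices.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        # Normalized horizontal position of the recognized pedestrian in the
        # surveillance-camera image: 0.0 = left edge, 1.0 = right edge.
        x: float

    def pedestrian_side(det: Detection) -> str:
        # Assumption: the camera view is aligned with the vehicle's driving
        # direction, so the image midline separates the vehicle's left/right.
        return "left" if det.x < 0.5 else "right"

    def display_location(side: str) -> None:
        # Stand-in for controlling a display device (cluster, center fascia, or HUD).
        print(f"Display: pedestrian located at the {side} of the vehicle")

    display_location(pedestrian_side(Detection(x=0.2)))  # prints a "left" indication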
With respect to claim 2, Maeng teaches the driving assistance device according to claim 1, wherein the processor is configured to determine whether the pedestrian is located at the left or right of the vehicle based on metadata of the image data received from the surveillance camera and driving information of the vehicle (as discussed by at least ¶ 0019-0020, 0026-0032, 0036-0044, 0047-0050 & 0057-0059).
With respect to claim 12, Maeng teaches the driving assistance device according to claim 1, wherein the image data is received from the surveillance camera through vehicle-to-everything (V2X) communication (e.g., via 140 & 240).
With respect to claim 13, Maeng teaches the driving assistance device according to claim 1, wherein the display device comprises at least one of a cluster display, a center fascia display, or a head-up display (HUD) (HUD 230, as discussed in detail above with respect to claim 1; because a cluster display, a center fascia display, and a HUD are recited in the alternative, it is sufficient to address one of the claimed alternatives).
With respect to claim 14, Maeng teaches a driving assistance method performed by a driving assistance device (100 & 200 together) of a vehicle (2) configured to communicate with a surveillance camera (110), the driving assistance method comprising: determining whether a vehicle (2) enters or is located in a child protection zone [for example, as depicted by at least Figs. 1-9 and as discussed by at least ¶ 0001-0009, 0017-0020, 0026-0027, 0031-0032, 0047, 0055 & 0065, the vehicle 2 is determined to be within a child protection zone; because determining whether a vehicle enters a child protection zone and determining whether a vehicle is located in a child protection zone are recited in the alternative, it is sufficient to address one of the claimed alternatives]; receiving image data from the surveillance camera having a sensing field of view within the child protection zone when it is determined that the vehicle enters or is located in the child protection zone [the broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met (e.g., see: MPEP 2111.04_II), and “receiving image data from the surveillance camera having a sensing field of view within the child protection zone” would not necessarily be performed as a step of the claimed method at times including when the condition “when it is determined that the vehicle enters or is located in the child protection zone” is not met during performing of the claimed method (e.g., when “determining whether a vehicle enters or is located in a child protection zone” differently results in no determination that the vehicle enters or is located in the child protection zone), such that “receiving image data from the surveillance camera having a sensing field of view within the child protection zone when it is determined that the vehicle enters or is located in the child protection zone” does not necessarily further limit the scope of the claimed method under a broadest reasonable interpretation—note: to avoid interpretation of a contingent limitation in this instance, the examiner suggests amending “determining whether a vehicle enters or is located in a child protection zone” to instead recite something like --determining that a vehicle enters or is located in a child protection zone--; even so, for example, as depicted by at least Figs. 
1, 2 & 9 and as discussed by at least ¶ 0001-0008, 0013, 0017-0020, 0024-0027, 0047-0050, 0055 & 0065, a school zone image (e.g., “image data”) is received from the CCTV camera 110 at times including when the vehicle 2 is determined to be located in a preset area within the child protection zone, where the CCTV camera 110 obtains the school zone image by photographing the preset area]; recognizing a pedestrian based on the image data received from the surveillance camera [“recognizing a pedestrian based on the image data received from the surveillance camera” would not necessarily be performed as a step of the claimed method at times including when “receiving image data from the surveillance camera having a sensing field of view within the child protection zone” is not performed as a step of the claimed method at times including when the condition “when it is determined that the vehicle enters or is located in the child protection zone” is not met during performing of the claimed method, such that “recognizing a pedestrian based on the image data received from the surveillance camera” does not necessarily further limit the scope of the claimed method under a broadest reasonable interpretation (e.g., see: MPEP 2111.04_II, as discussed in detail above); even so, for example, as depicted by at least Figs. 1-9 and as discussed by at least ¶ 0008-0009, 0013, 0019-0022, 0028, 0030-0032, 0041-0042, 0051, 0053-0055, 0060-0061 & 0063, a pedestrian is recognized in the preset area based on the school zone image]; generating location information of the pedestrian indicating whether the pedestrian is located at left or right of the vehicle based on a driving direction of the vehicle when the pedestrian is recognized in the image data received from the surveillance camera [“generating location information of the pedestrian indicating whether the pedestrian is located at left or right of the vehicle based on a driving direction of the vehicle” would not necessarily be performed as a step of the claimed method at times including when “receiving image data from the surveillance camera having a sensing field of view within the child protection zone; [and] recognizing a pedestrian based on the image data received from the surveillance camera” are not performed as steps of the claimed method at times including when the condition “when it is determined that the vehicle enters or is located in the child protection zone” is not met during performing of the claimed method, such that “generating location information of the pedestrian indicating whether the pedestrian is located at left or right of the vehicle based on a driving direction of the vehicle when the pedestrian is recognized in the image data received from the surveillance camera” does not necessarily further limit the scope of the claimed method under a broadest reasonable interpretation (e.g., see: MPEP 2111.04_II, as discussed in detail above); even so, for example, as depicted by at least Figs. 
1-9 and as discussed by at least ¶ 0008, 0021-0023, 0041-0044, 0055, 0056-0057, 0059 & 0063, a preset pedestrian character image is synthesized (e.g., “generating”) at an approximate position (e.g., an approximate left position) (e.g., “location information”) on an image in front of the vehicle 2 when the pedestrian is in front of the vehicle 2 (e.g., when the pedestrian is on a left side of a vehicle front) (e.g., “indicating whether the pedestrian is located at left or right of the vehicle”), where the image in front of the vehicle 2 is acquired by a black box camera 210 while the vehicle 2 is running (e.g., “based on a driving direction of the vehicle”) based on receiving of a caution signal (or a warning signal) (which includes pedestrian location information) (e.g., “when the pedestrian is recognized in the image data received from the surveillance camera”)]; and outputting the location information of the pedestrian indicating whether the pedestrian is located at the left or right of the vehicle [“outputting the location information of the pedestrian indicating whether the pedestrian is located at the left or right of the vehicle” would not necessarily be performed as a step of the claimed method at times including when “receiving image data from the surveillance camera having a sensing field of view within the child protection zone; recognizing a pedestrian based on the image data received from the surveillance camera; [and] generating location information of the pedestrian indicating whether the pedestrian is located at left or right of the vehicle based on a driving direction of the vehicle when the pedestrian is recognized in the image data received from the surveillance camera” are not performed as steps of the claimed method at times including when the condition “when it is determined that the vehicle enters or is located in the child protection zone” is not met during performing of the claimed method, such that “outputting the location information of the pedestrian indicating whether the pedestrian is located at the left or right of the vehicle” does not necessarily further limit the scope of the claimed method under a broadest reasonable interpretation (e.g., see: MPEP 2111.04_II, as discussed in detail above); even so, for example, as depicted by at least Figs. 1, 2 & 9 and as discussed by at least ¶ 0008, 0022-0023, 0043-0044, 0056, 0059 & 0063, the preset pedestrian character image synthesized at the approximate position (e.g., the approximate left position) on the image in front of the vehicle 2 is outputted to a heads-up display (HUD) 230 (or an in-vehicle display device) of the vehicle 2].
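Examiner’s note (illustrative only): the contingent-limitation analysis above can be summarized by the following hypothetical, self-contained Python sketch, in which the remaining steps are simply never reached when the condition precedent (the vehicle entering or being located in the child protection zone) is not met; all names are the examiner’s own.

    def recognize(image_data: bytes) -> float | None:
        # Hypothetical stub: returns a normalized x position of a recognized
        # pedestrian, or None when no pedestrian appears in the image data.
        return 0.3 if image_data else None

    def run_method(in_child_zone: bool, image_data: bytes) -> None:
        if not in_child_zone:
            # Condition precedent not met: under the broadest reasonable
            # interpretation, the contingent receiving / recognizing /
            # generating / outputting steps are not required to be performed.
            return
        x = recognize(image_data)              # receiving + recognizing
        if x is None:
            return
        side = "left" if x < 0.5 else "right"  # generating location information
        print(f"Output: pedestrian at the {side} of the vehicle")  # outputting

    run_method(in_child_zone=False, image_data=b"\x01")  # no further steps performed
    run_method(in_child_zone=True, image_data=b"\x01")   # full path executes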
With respect to claim 20, Maeng teaches a child protection zone blind spot warning system (100 & 200 together) comprising: a detection system configured to recognize a pedestrian based on image data received from a surveillance camera (110) having a sensing field of view within a child protection zone, acquire location information of the pedestrian based on the image data received from the surveillance camera, and transmit the location information of the pedestrian to a vehicle entering into or travelling in the child protection zone through V2X communication [for example, as depicted by at least Figs. 1-9 and as discussed by at least ¶ 0001-0009, 0013, 0017-0022, 0024-0028, 0030-0032, 0041-0042, 0047-0051, 0053-0057, 0060-0061, 0063 & 0065, a control unit 150 of the school zone monitoring apparatus 100 of the “child protection zone blind spot warning system” is structured to perform functions to recognize a pedestrian based on a school zone image (e.g., “image data”) obtained by the CCTV camera 110 by photographing a preset area within a child protection zone, acquire pedestrian location information based on the school zone image, and transmit the pedestrian location information, via a caution signal (or a warning signal), to the vehicle 2, from a communication unit 140 to a communication unit 240 (e.g., “through V2X communication”)]; and a driving assistance device comprised in the vehicle entering into or travelling in the child protection zone and configured to receive the location information of the pedestrian through the V2X communication and display the location information of the pedestrian, wherein the location information of the pedestrian indicating whether the pedestrian is located at left or right is generated based on a driving direction of the vehicle [for example, as depicted by at least Figs. 1, 2 & 9 and as discussed by at least ¶ 0008, 0022-0023, 0041-0044, 0055-0057, 0059 & 0063, a control unit 250 of the vehicle monitoring apparatus 200 of the “child protection zone blind spot warning system” is structured to perform functions to receive the pedestrian location information via the caution signal (or the warning signal) transmitted from the school zone monitoring apparatus 100, via the communication unit 240 from the communication unit 140, synthesize a preset pedestrian character image at an approximate position (e.g., an approximate left position) on an image in front of the vehicle 2 when the pedestrian is in front of the vehicle 2 (e.g., when the pedestrian is on a left side of a vehicle front), and control a heads-up display (HUD) 230 (or an in-vehicle display device) of the vehicle 2 to display the preset pedestrian character image synthesized at the approximate position (e.g., the approximate left position) on the image in front of the vehicle 2, where the image in front of the vehicle 2 is acquired by a black box camera 210 while the vehicle 2 is running based on receiving of the caution signal (or the warning signal) (which includes the pedestrian location information)].
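Examiner’s note (illustrative only): the division of labor mapped to claim 20 (a roadside detection system and an in-vehicle driving assistance device joined by V2X) can be sketched as follows; the names are hypothetical and the V2X transport is stubbed as a direct call rather than a real communication stack.

    from dataclasses import dataclass

    @dataclass
    class PedestrianLocationMsg:
        side: str  # "left" or "right" relative to the vehicle's driving direction

    class DetectionSystem:
        # Roadside side: recognize and localize the pedestrian from the
        # surveillance-camera image, then package the location information.
        def on_camera_frame(self, x_normalized: float) -> PedestrianLocationMsg:
            return PedestrianLocationMsg(side="left" if x_normalized < 0.5 else "right")

    class DrivingAssistanceDevice:
        # Vehicle side: receive the location information and display it.
        def on_v2x_message(self, msg: PedestrianLocationMsg) -> None:
            print(f"Display: pedestrian on the {msg.side}")

    vehicle = DrivingAssistanceDevice()
    vehicle.on_v2x_message(DetectionSystem().on_camera_frame(0.8))  # stubbed V2X hop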
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3, 4, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Maeng in view of U.S. Patent Application Publication No. 2024/0251059 to Tamagawa (hereinafter: “Tamagawa”).
With respect to claim 3, Maeng teaches the driving assistance device according to claim 2; however, Maeng appears to lack a clear teaching as to whether the processor is configured to: horizontally flip an image acquired from the image data received from the surveillance camera, generate the location information of the pedestrian indicating that the pedestrian is located at the left of the vehicle when the pedestrian is located at left in the horizontally flipped image, and generate the location information of the pedestrian indicating that the pedestrian is located at the right of the vehicle when the pedestrian is located at right in the horizontally flipped image; and confirm the location information of the pedestrian when a direction in which the surveillance camera faces matches the driving direction of the vehicle.
Tamagawa teaches an analogous driving assistance device including a processor configured to: horizontally flip an image acquired from image data received from a surveillance camera, generate location information of a pedestrian indicating that the pedestrian is located at the left of the vehicle when the pedestrian is located at left in the horizontally flipped image, and generate the location information of the pedestrian indicating that the pedestrian is located at the right of the vehicle when the pedestrian is located at right in the horizontally flipped image; and confirm the location information of the pedestrian when a direction in which the surveillance camera faces matches a driving direction of the vehicle (as depicted by at least Figs. 1-6 and as discussed by at least ¶ 0067-0085).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the driving assistance device of Maeng with the teachings of Tamagawa, if even necessary, such that the processor is configured to: horizontally flip an image acquired from the image data received from the surveillance camera, generate the location information of the pedestrian indicating that the pedestrian is located at the left of the vehicle when the pedestrian is located at left in the horizontally flipped image, and generate the location information of the pedestrian indicating that the pedestrian is located at the right of the vehicle when the pedestrian is located at right in the horizontally flipped image; and confirm the location information of the pedestrian when a direction in which the surveillance camera faces matches the driving direction of the vehicle, to beneficially enable the image data to be displayed to a remote operator in a manner that enables the remote operator to immediately grasp a traffic situation while instantaneously grasping a traveling direction.
With respect to claim 4, Maeng modified supra teaches the driving assistance device according to claim 2, wherein the processor is configured to horizontally flip the location information of the pedestrian when a direction in which the surveillance camera faces is opposite to the driving direction of the vehicle, and control the display device to display the horizontally flipped location information of the pedestrian (as discussed in detail above with respect to claim 3).
With respect to claim 15, Maeng modified supra teaches the method according to claim 14, wherein the generating of the location information of the pedestrian indicating whether the pedestrian is located at the left or right of the vehicle comprises horizontally flipping an image acquired from the image data received from the surveillance camera, generating the location information of the pedestrian indicating that the pedestrian is located at the left of the vehicle when the pedestrian is located at left in the horizontally flipped image, generating the location information of the pedestrian indicating that the pedestrian is located at the right of the vehicle when the pedestrian is located at right in the horizontally flipped image, and confirming the location information of the pedestrian when a direction in which the surveillance camera faces matches the driving direction of the vehicle (as discussed in detail above with respect to claim 3).
With respect to claim 16, Maeng modified supra teaches the method according to claim 15, wherein the generating of the location information of the pedestrian indicating whether the pedestrian is located at the left or right of the vehicle comprises horizontally flipping the location information of the pedestrian when the direction in which the surveillance camera faces is opposite to the driving direction of the vehicle, and controlling a display device to display the horizontally flipped location information of the pedestrian (as discussed in detail above with respect to claim 3).
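Examiner’s note (illustrative only): one non-limiting way to realize the flip-and-confirm logic addressed above for claims 3, 4, 15, and 16 is sketched below; the heading comparison and all names are the examiner’s hypothetical choices, not Tamagawa’s code.

    def side_in_flipped_image(x_flipped: float) -> str:
        # Position of the pedestrian in the horizontally flipped camera image:
        # left in the flipped image maps to the vehicle's left (claims 3 and 15).
        return "left" if x_flipped < 0.5 else "right"

    def resolve_side(x_flipped: float, camera_heading_deg: float,
                     vehicle_heading_deg: float) -> str:
        side = side_in_flipped_image(x_flipped)
        # Smallest angular difference between the two headings, in [0, 180].
        diff = abs((camera_heading_deg - vehicle_heading_deg + 180.0) % 360.0 - 180.0)
        if diff < 90.0:
            return side  # camera faces the driving direction: confirm
        # Camera faces opposite the driving direction: horizontally flip the
        # location information before display (claims 4 and 16).
        return "right" if side == "left" else "left"

    print(resolve_side(0.2, camera_heading_deg=90.0, vehicle_heading_deg=90.0))   # left
    print(resolve_side(0.2, camera_heading_deg=270.0, vehicle_heading_deg=90.0))  # right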
Claims 5-9 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Maeng in view of Tamagawa, and further in view of KR 10-2021-0158465 A to Mun et al. (hereinafter: “Mun”).
With respect to claim 5, Maeng modified supra teaches the driving assistance device according to claim 2; however, Maeng appears to lack a clear teaching as to whether the location information of the pedestrian further includes a distance between the vehicle and the pedestrian.
Mun teaches an analogous driving assistance device which determines a distance between a vehicle and a pedestrian as location information (as discussed by at least ¶ 0013-0014, 0023-0024, 0034, 0043-0045 & 0054-0067).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the driving assistance device of Maeng with the teachings of Mun such that the location information of the pedestrian further includes a distance between the vehicle and the pedestrian because Mun further teaches that the distance between the vehicle and the pedestrian is beneficially usable to implement automated emergency braking (AEB) of the vehicle in the child protection zone to prevent an accident between the vehicle and the pedestrian (as discussed by at least ¶ 0058-0067 of Mun).
With respect to claim 6, Maeng modified supra teaches the driving assistance device according to claim 5; however, Maeng appears to lack a clear teaching as to whether the processor is configured to determine a possibility of collision based on the distance between the vehicle and the pedestrian and a driving speed of the vehicle, and control the vehicle based on the possibility of the collision.
Mun further teaches that a processor of the analogous driving assistance device is configured to determine a possibility of collision based on the distance between the vehicle and the pedestrian and a driving speed of the vehicle, and control the vehicle based on the possibility of the collision (as discussed by at least ¶ 0013-0014, 0023-0024, 0034, 0043-0045 & 0054-0067).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the driving assistance device of Maeng with the teachings of Mun such that the processor is configured to determine a possibility of collision based on the distance between the vehicle and the pedestrian and a driving speed of the vehicle, and control the vehicle based on the possibility of the collision because Mun further teaches that the distance between the vehicle and the pedestrian and the driving speed of the vehicle are beneficially usable to determine a possibility of collision between the vehicle and the pedestrian and to implement the AEB of the vehicle in the child protection zone to prevent the collision between the vehicle and the pedestrian (as discussed by at least ¶ 0058-0067 of Mun).
With respect to claim 7, Maeng modified supra teaches the driving assistance device according to claim 6; however, Maeng appears to lack a clear teaching as to whether the processor is configured to control the display device to display a collision risk warning when the possibility of the collision is equal to or higher than a first risk level and less than a second risk level.
Mun further teaches that the processor of the analogous driving assistance device is configured to control the display device to display a collision risk warning when the possibility of the collision is equal to or higher than a first risk level and less than a second risk level [as discussed by at least ¶ 0013-0014, 0023-0024, 0034, 0043-0045 & 0054-0067; for example, a head up display (HUD) displays a warning (e.g., “collision risk warning”) when a distance between the vehicle and the pedestrian is greater than a first distance but less than a second distance larger than the first distance, where AEB is implemented when the distance between the vehicle and the pedestrian is less than the first distance (e.g., “when the possibility of the collision is equal to or higher than a first risk level and less than a second risk level”)].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the driving assistance device of Maeng with the teachings of Mun such that the processor is configured to control the display device to display a collision risk warning when the possibility of the collision is equal to or higher than a first risk level and less than a second risk level because Mun further teaches implementing the AEB of the vehicle when a distance-based risk level is relatively high and instead displaying a warning, via a HUD, when the distance-based risk level is relatively low, to prevent the collision between the vehicle and the pedestrian (as discussed by at least ¶ 0058-0067 of Mun).
With respect to claim 8, Maeng modified supra teaches the driving assistance device according to claim 7, wherein the processor is configured to control to perform emergency braking when the possibility of the collision is equal to or higher than the second risk level (as discussed in detail above with respect to claim 7).
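Examiner’s note (illustrative only): a minimal Python sketch of the two-level response mapped to Mun for claims 6-8, using a simple time-to-collision proxy for the claimed “possibility of collision”; the thresholds and names are hypothetical.

    def respond(distance_m: float, speed_mps: float,
                first_level_ttc_s: float = 4.0, second_level_ttc_s: float = 1.5) -> str:
        # Shorter time-to-collision means a higher possibility of collision,
        # so the second (higher) risk level is the shorter TTC threshold.
        if speed_mps <= 0.0:
            return "no action"
        ttc = distance_m / speed_mps
        if ttc <= second_level_ttc_s:
            return "perform emergency braking"       # at or above the second risk level
        if ttc <= first_level_ttc_s:
            return "display collision risk warning"  # between the first and second levels
        return "no action"

    print(respond(distance_m=30.0, speed_mps=10.0))  # warning (TTC = 3.0 s)
    print(respond(distance_m=10.0, speed_mps=10.0))  # emergency braking (TTC = 1.0 s)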
With respect to claim 9, Maeng teaches the driving assistance device according to claim 1; however, Maeng appears to lack a clear teaching as to whether the processor is configured to control to output an audio representing the location information of the pedestrian.
Mun teaches a processor of an analogous driving assistance device which is configured to control to output an audio representing the location information of the pedestrian [as discussed by at least ¶ 0008, 0013-0014, 0022-0024, 0026-0027, 0034, 0043-0045 & 0054-0067; for example, a processor determines a distance between a vehicle and a pedestrian based, in part, on output of a camera sensor, determines whether the distance is greater than a first distance but less than a second distance larger than the first distance or less than the first distance, and outputs a visual warning via a head up display (HUD) and a sound-based warning based on the distance with respect to the first distance and the second distance].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the driving assistance device of Maeng with the teachings of Mun such that the processor is configured to control to output an audio representing the location information of the pedestrian to beneficially warn a driver of the vehicle in the child protection zone to prevent an accident between the vehicle and the pedestrian (as discussed by at least ¶ 0058-0067 of Mun).
With respect to claim 17, Maeng modified supra teaches the method according to claim 14, further comprising: after the generating of the location information of the pedestrian indicating whether the pedestrian is located at the left or right of the vehicle, determining a possibility of collision based on a distance between the vehicle and the pedestrian and a driving speed of the vehicle; and controlling the vehicle based on the possibility of the collision (as discussed in detail above with respect to at least claims 5-8 and 14; also, see: MPEP 2111.04_II, as discussed in detail above with respect to claim 14).
With respect to claim 18, Maeng modified supra teaches the method according to claim 17, wherein the controlling of the vehicle based on the possibility of the collision comprises controlling a display device to display a collision risk warning when the possibility of collision is equal to or higher than a first risk level and less than a second risk level (as discussed in detail above with respect to at least claims 5-8, 14, and 17; also, see: MPEP 2111.04_II, as discussed in detail above with respect to claim 14).
With respect to claim 19, Maeng modified supra teaches the method according to claim 18, wherein the controlling of the vehicle based on the possibility of the collision comprises controlling to perform emergency braking when the possibility of collision is equal to or higher than the second risk level (as discussed in detail above with respect to at least claims 5-8, 14, 17, and 18; also, see: MPEP 2111.04_II, as discussed in detail above with respect to claim 14).
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Maeng in view of Mun.
With respect to claim 10, Maeng teaches the driving assistance device according to claim 1; however, Maeng appears to lack a clear teaching as to whether the processor is configured to classify a type of the pedestrian recognized in the image data received from the surveillance camera and control the display device to display information related to the type of the pedestrian.
Mun teaches a processor of an analogous driving assistance device which is configured to classify a type of a pedestrian recognized in the image data received from a surveillance camera and control a display device to display information related to the type of the pedestrian [as discussed by at least ¶ 0008, 0013-0014, 0022-0024, 0026-0027, 0034, 0043-0045 & 0054-0067; for example, the processor determines a distance between a vehicle and a pedestrian based, in part, on output of a camera sensor, determines whether the distance is greater than a first distance but less than a second distance larger than the first distance (such that a “type of the pedestrian” is a “relatively far” pedestrian type) or less than the first distance (such that the “type of the pedestrian” is a “relatively close” pedestrian type), and displays a warning (e.g., “information related to the type of the pedestrian”) via a head up display (HUD) (and outputs a sound-based warning) based on the distance with respect to the first distance and the second distance].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the driving assistance device of Maeng with the teachings of Mun such that the processor is configured to classify a type of the pedestrian recognized in the image data received from the surveillance camera and control the display device to display information related to the type of the pedestrian to beneficially warn a driver of the vehicle in the child protection zone to prevent an accident between the vehicle and the pedestrian (as discussed by at least ¶ 0058-0067 of Mun).
With respect to claim 11, Maeng modified supra teaches the driving assistance device according to claim 10, wherein the processor is configured to control to output an audio representing the information related to the type of the pedestrian recognized in the image data received from the surveillance camera (as discussed in detail above with respect to claim 10).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is provided on the attached PTO-892 Notice of References Cited form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN ZALESKAS whose telephone number is (571)272-5958. The examiner can normally be reached M-F 8:00 AM - 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Logan Kraft, can be reached at 571-270-5065. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN M ZALESKAS/Primary Examiner, Art Unit 3747